Test Report: Docker_macOS 14420

                    
7d3b93abdd89ce8ebba3c81494e660414100c7c4:2022-06-29:24669

Failed tests (22/289)

TestDownloadOnly/v1.16.0/preload-exists (0.1s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
aaa_download_only_test.go:107: failed to verify preloaded tarball file exists: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/preload-exists (0.10s)
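The failure above is a plain file-existence `stat` on the cached preload tarball. A minimal shell sketch of the same check, assuming a default `$HOME/.minikube` cache layout as a stand-in for the CI workspace's `MINIKUBE_HOME`:

```shell
# Sketch of the preload-exists check: stat the cached preload tarball.
# The cache path layout is copied from the failure message above;
# $HOME/.minikube is an assumption standing in for the CI workspace.
PRELOAD="$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4"
if [ -f "$PRELOAD" ]; then
  RESULT="preload exists"
else
  RESULT="preload missing"
fi
echo "$RESULT"
```

On the failing CI host this check reports the tarball missing, which is exactly what the test's `stat` error shows.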

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (255.91s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220629110235-24356 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0629 11:03:42.319707   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
E0629 11:05:58.447185   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
E0629 11:06:07.682413   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
E0629 11:06:07.688902   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
E0629 11:06:07.699420   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
E0629 11:06:07.721589   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
E0629 11:06:07.762945   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
E0629 11:06:07.845267   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
E0629 11:06:08.007501   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
E0629 11:06:08.329849   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
E0629 11:06:08.970256   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
E0629 11:06:10.252475   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
E0629 11:06:12.814760   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
E0629 11:06:17.935043   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
E0629 11:06:26.161830   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
E0629 11:06:28.176594   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
E0629 11:06:48.657119   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220629110235-24356 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m15.884950947s)

-- stdout --
	* [ingress-addon-legacy-20220629110235-24356] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14420
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-20220629110235-24356 in cluster ingress-addon-legacy-20220629110235-24356
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0629 11:02:35.840550   27428 out.go:296] Setting OutFile to fd 1 ...
	I0629 11:02:35.840715   27428 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:02:35.840720   27428 out.go:309] Setting ErrFile to fd 2...
	I0629 11:02:35.840724   27428 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:02:35.841041   27428 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 11:02:35.841358   27428 out.go:303] Setting JSON to false
	I0629 11:02:35.856821   27428 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":7323,"bootTime":1656518432,"procs":356,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0629 11:02:35.857006   27428 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 11:02:35.900419   27428 out.go:177] * [ingress-addon-legacy-20220629110235-24356] minikube v1.26.0 on Darwin 12.4
	I0629 11:02:35.922609   27428 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 11:02:35.922564   27428 notify.go:193] Checking for updates...
	I0629 11:02:35.944348   27428 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 11:02:35.966254   27428 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0629 11:02:36.008227   27428 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 11:02:36.050408   27428 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 11:02:36.072596   27428 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 11:02:36.141648   27428 docker.go:137] docker version: linux-20.10.16
	I0629 11:02:36.141793   27428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:02:36.264200   27428 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:46 SystemTime:2022-06-29 18:02:36.206202477 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:02:36.308022   27428 out.go:177] * Using the docker driver based on user configuration
	I0629 11:02:36.329909   27428 start.go:284] selected driver: docker
	I0629 11:02:36.329935   27428 start.go:808] validating driver "docker" against <nil>
	I0629 11:02:36.329960   27428 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 11:02:36.333515   27428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:02:36.454854   27428 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:46 SystemTime:2022-06-29 18:02:36.397361071 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:02:36.455006   27428 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0629 11:02:36.455203   27428 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0629 11:02:36.477186   27428 out.go:177] * Using Docker Desktop driver with root privileges
	I0629 11:02:36.498883   27428 cni.go:95] Creating CNI manager for ""
	I0629 11:02:36.498916   27428 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:02:36.498930   27428 start_flags.go:310] config:
	{Name:ingress-addon-legacy-20220629110235-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220629110235-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:02:36.520750   27428 out.go:177] * Starting control plane node ingress-addon-legacy-20220629110235-24356 in cluster ingress-addon-legacy-20220629110235-24356
	I0629 11:02:36.542796   27428 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 11:02:36.564611   27428 out.go:177] * Pulling base image ...
	I0629 11:02:36.606926   27428 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0629 11:02:36.606930   27428 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 11:02:36.671734   27428 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 11:02:36.671756   27428 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 11:02:36.685845   27428 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0629 11:02:36.685866   27428 cache.go:57] Caching tarball of preloaded images
	I0629 11:02:36.686190   27428 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0629 11:02:36.731933   27428 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0629 11:02:36.753085   27428 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0629 11:02:36.848748   27428 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0629 11:02:41.671247   27428 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0629 11:02:41.671493   27428 preload.go:256] verifying checksumm of /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0629 11:02:42.296885   27428 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0629 11:02:42.297121   27428 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/config.json ...
	I0629 11:02:42.297145   27428 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/config.json: {Name:mk68649fa2a40cc7336c6aacba14dc1bb474b329 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:02:42.306659   27428 cache.go:208] Successfully downloaded all kic artifacts
	I0629 11:02:42.306712   27428 start.go:352] acquiring machines lock for ingress-addon-legacy-20220629110235-24356: {Name:mkcb4bb1fd398d5afcd8943fa2dc88411907e0aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 11:02:42.306907   27428 start.go:356] acquired machines lock for "ingress-addon-legacy-20220629110235-24356" in 176.166µs
	I0629 11:02:42.306950   27428 start.go:91] Provisioning new machine with config: &{Name:ingress-addon-legacy-20220629110235-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220629110235-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 11:02:42.307020   27428 start.go:131] createHost starting for "" (driver="docker")
	I0629 11:02:42.353570   27428 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0629 11:02:42.353784   27428 start.go:165] libmachine.API.Create for "ingress-addon-legacy-20220629110235-24356" (driver="docker")
	I0629 11:02:42.353809   27428 client.go:168] LocalClient.Create starting
	I0629 11:02:42.353910   27428 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem
	I0629 11:02:42.353953   27428 main.go:134] libmachine: Decoding PEM data...
	I0629 11:02:42.353968   27428 main.go:134] libmachine: Parsing certificate...
	I0629 11:02:42.354015   27428 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem
	I0629 11:02:42.354048   27428 main.go:134] libmachine: Decoding PEM data...
	I0629 11:02:42.354060   27428 main.go:134] libmachine: Parsing certificate...
	I0629 11:02:42.354468   27428 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220629110235-24356 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0629 11:02:42.418511   27428 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220629110235-24356 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0629 11:02:42.418669   27428 network_create.go:272] running [docker network inspect ingress-addon-legacy-20220629110235-24356] to gather additional debugging logs...
	I0629 11:02:42.418694   27428 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220629110235-24356
	W0629 11:02:42.481040   27428 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220629110235-24356 returned with exit code 1
	I0629 11:02:42.481066   27428 network_create.go:275] error running [docker network inspect ingress-addon-legacy-20220629110235-24356]: docker network inspect ingress-addon-legacy-20220629110235-24356: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20220629110235-24356
	I0629 11:02:42.481081   27428 network_create.go:277] output of [docker network inspect ingress-addon-legacy-20220629110235-24356]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20220629110235-24356
	
	** /stderr **
	I0629 11:02:42.481171   27428 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0629 11:02:42.544326   27428 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000416498] misses:0}
	I0629 11:02:42.544362   27428 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 11:02:42.544377   27428 network_create.go:115] attempt to create docker network ingress-addon-legacy-20220629110235-24356 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0629 11:02:42.544540   27428 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-20220629110235-24356 ingress-addon-legacy-20220629110235-24356
	I0629 11:02:42.637673   27428 network_create.go:99] docker network ingress-addon-legacy-20220629110235-24356 192.168.49.0/24 created
	I0629 11:02:42.637725   27428 kic.go:106] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-20220629110235-24356" container
	I0629 11:02:42.637925   27428 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0629 11:02:42.700712   27428 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-20220629110235-24356 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220629110235-24356 --label created_by.minikube.sigs.k8s.io=true
	I0629 11:02:42.763755   27428 oci.go:103] Successfully created a docker volume ingress-addon-legacy-20220629110235-24356
	I0629 11:02:42.763891   27428 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-20220629110235-24356-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220629110235-24356 --entrypoint /usr/bin/test -v ingress-addon-legacy-20220629110235-24356:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -d /var/lib
	I0629 11:02:43.223813   27428 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-20220629110235-24356
	I0629 11:02:43.223851   27428 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0629 11:02:43.223865   27428 kic.go:179] Starting extracting preloaded images to volume ...
	I0629 11:02:43.223979   27428 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20220629110235-24356:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -I lz4 -xf /preloaded.tar -C /extractDir
	I0629 11:02:47.869232   27428 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20220629110235-24356:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -I lz4 -xf /preloaded.tar -C /extractDir: (4.645133132s)
	I0629 11:02:47.869251   27428 kic.go:188] duration metric: took 4.645361 seconds to extract preloaded images to volume
	I0629 11:02:47.869361   27428 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0629 11:02:47.991610   27428 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-20220629110235-24356 --name ingress-addon-legacy-20220629110235-24356 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220629110235-24356 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-20220629110235-24356 --network ingress-addon-legacy-20220629110235-24356 --ip 192.168.49.2 --volume ingress-addon-legacy-20220629110235-24356:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e
	I0629 11:02:48.360840   27428 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220629110235-24356 --format={{.State.Running}}
	I0629 11:02:48.431648   27428 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220629110235-24356 --format={{.State.Status}}
	I0629 11:02:48.509030   27428 cli_runner.go:164] Run: docker exec ingress-addon-legacy-20220629110235-24356 stat /var/lib/dpkg/alternatives/iptables
	I0629 11:02:48.635943   27428 oci.go:144] the created container "ingress-addon-legacy-20220629110235-24356" has a running status.
	I0629 11:02:48.635973   27428 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/ingress-addon-legacy-20220629110235-24356/id_rsa...
	I0629 11:02:48.824689   27428 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/ingress-addon-legacy-20220629110235-24356/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0629 11:02:48.824835   27428 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/ingress-addon-legacy-20220629110235-24356/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0629 11:02:48.939758   27428 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220629110235-24356 --format={{.State.Status}}
	I0629 11:02:49.006767   27428 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0629 11:02:49.006793   27428 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-20220629110235-24356 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0629 11:02:49.128175   27428 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220629110235-24356 --format={{.State.Status}}
	I0629 11:02:49.195016   27428 machine.go:88] provisioning docker machine ...
	I0629 11:02:49.195068   27428 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-20220629110235-24356"
	I0629 11:02:49.195233   27428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220629110235-24356
	I0629 11:02:49.262593   27428 main.go:134] libmachine: Using SSH client type: native
	I0629 11:02:49.262764   27428 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 50541 <nil> <nil>}
	I0629 11:02:49.262777   27428 main.go:134] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-20220629110235-24356 && echo "ingress-addon-legacy-20220629110235-24356" | sudo tee /etc/hostname
	I0629 11:02:49.393808   27428 main.go:134] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-20220629110235-24356
	
	I0629 11:02:49.393901   27428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220629110235-24356
	I0629 11:02:49.461348   27428 main.go:134] libmachine: Using SSH client type: native
	I0629 11:02:49.461524   27428 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 50541 <nil> <nil>}
	I0629 11:02:49.461539   27428 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-20220629110235-24356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-20220629110235-24356/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-20220629110235-24356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 11:02:49.581516   27428 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 11:02:49.581533   27428 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube}
	I0629 11:02:49.581560   27428 ubuntu.go:177] setting up certificates
	I0629 11:02:49.581566   27428 provision.go:83] configureAuth start
	I0629 11:02:49.581638   27428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220629110235-24356
	I0629 11:02:49.648694   27428 provision.go:138] copyHostCerts
	I0629 11:02:49.648726   27428 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem
	I0629 11:02:49.648775   27428 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem, removing ...
	I0629 11:02:49.648785   27428 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem
	I0629 11:02:49.648888   27428 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem (1082 bytes)
	I0629 11:02:49.649038   27428 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem
	I0629 11:02:49.649069   27428 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem, removing ...
	I0629 11:02:49.649076   27428 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem
	I0629 11:02:49.649136   27428 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem (1123 bytes)
	I0629 11:02:49.649251   27428 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem
	I0629 11:02:49.649295   27428 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem, removing ...
	I0629 11:02:49.649300   27428 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem
	I0629 11:02:49.649358   27428 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem (1675 bytes)
	I0629 11:02:49.649490   27428 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-20220629110235-24356 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-20220629110235-24356]
	I0629 11:02:49.698685   27428 provision.go:172] copyRemoteCerts
	I0629 11:02:49.698736   27428 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 11:02:49.698775   27428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220629110235-24356
	I0629 11:02:49.766373   27428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50541 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/ingress-addon-legacy-20220629110235-24356/id_rsa Username:docker}
	I0629 11:02:49.852818   27428 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0629 11:02:49.852892   27428 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0629 11:02:49.869789   27428 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0629 11:02:49.869857   27428 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem --> /etc/docker/server.pem (1294 bytes)
	I0629 11:02:49.886780   27428 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0629 11:02:49.886848   27428 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0629 11:02:49.903913   27428 provision.go:86] duration metric: configureAuth took 322.331549ms
	I0629 11:02:49.903927   27428 ubuntu.go:193] setting minikube options for container-runtime
	I0629 11:02:49.904064   27428 config.go:178] Loaded profile config "ingress-addon-legacy-20220629110235-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0629 11:02:49.904114   27428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220629110235-24356
	I0629 11:02:49.971887   27428 main.go:134] libmachine: Using SSH client type: native
	I0629 11:02:49.972038   27428 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 50541 <nil> <nil>}
	I0629 11:02:49.972056   27428 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0629 11:02:50.090296   27428 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0629 11:02:50.090313   27428 ubuntu.go:71] root file system type: overlay
	I0629 11:02:50.090475   27428 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0629 11:02:50.090554   27428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220629110235-24356
	I0629 11:02:50.157736   27428 main.go:134] libmachine: Using SSH client type: native
	I0629 11:02:50.157892   27428 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 50541 <nil> <nil>}
	I0629 11:02:50.157940   27428 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0629 11:02:50.288858   27428 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0629 11:02:50.288975   27428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220629110235-24356
	I0629 11:02:50.357305   27428 main.go:134] libmachine: Using SSH client type: native
	I0629 11:02:50.357470   27428 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 50541 <nil> <nil>}
	I0629 11:02:50.357485   27428 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0629 11:02:50.936398   27428 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-29 18:02:50.303010534 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0629 11:02:50.936420   27428 machine.go:91] provisioned docker machine in 1.741376704s
	I0629 11:02:50.936427   27428 client.go:171] LocalClient.Create took 8.582568427s
	I0629 11:02:50.936443   27428 start.go:173] duration metric: libmachine.API.Create for "ingress-addon-legacy-20220629110235-24356" took 8.582612537s
	I0629 11:02:50.936453   27428 start.go:306] post-start starting for "ingress-addon-legacy-20220629110235-24356" (driver="docker")
	I0629 11:02:50.936457   27428 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 11:02:50.936531   27428 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 11:02:50.936594   27428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220629110235-24356
	I0629 11:02:51.004917   27428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50541 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/ingress-addon-legacy-20220629110235-24356/id_rsa Username:docker}
	I0629 11:02:51.092408   27428 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 11:02:51.095738   27428 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 11:02:51.095756   27428 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 11:02:51.095764   27428 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 11:02:51.095769   27428 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 11:02:51.095778   27428 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/addons for local assets ...
	I0629 11:02:51.095890   27428 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files for local assets ...
	I0629 11:02:51.096047   27428 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem -> 243562.pem in /etc/ssl/certs
	I0629 11:02:51.096053   27428 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem -> /etc/ssl/certs/243562.pem
	I0629 11:02:51.096219   27428 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 11:02:51.103294   27428 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /etc/ssl/certs/243562.pem (1708 bytes)
	I0629 11:02:51.120872   27428 start.go:309] post-start completed in 184.410614ms
	I0629 11:02:51.121365   27428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220629110235-24356
	I0629 11:02:51.189264   27428 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/config.json ...
	I0629 11:02:51.189687   27428 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 11:02:51.189732   27428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220629110235-24356
	I0629 11:02:51.257074   27428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50541 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/ingress-addon-legacy-20220629110235-24356/id_rsa Username:docker}
	I0629 11:02:51.342296   27428 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 11:02:51.346705   27428 start.go:134] duration metric: createHost completed in 9.039631618s
	I0629 11:02:51.346720   27428 start.go:81] releasing machines lock for "ingress-addon-legacy-20220629110235-24356", held for 9.039754313s
	I0629 11:02:51.346784   27428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220629110235-24356
	I0629 11:02:51.414003   27428 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 11:02:51.414039   27428 ssh_runner.go:195] Run: systemctl --version
	I0629 11:02:51.414081   27428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220629110235-24356
	I0629 11:02:51.414110   27428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220629110235-24356
	I0629 11:02:51.486087   27428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50541 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/ingress-addon-legacy-20220629110235-24356/id_rsa Username:docker}
	I0629 11:02:51.487039   27428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50541 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/ingress-addon-legacy-20220629110235-24356/id_rsa Username:docker}
	I0629 11:02:52.054630   27428 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0629 11:02:52.064893   27428 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0629 11:02:52.064948   27428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 11:02:52.074358   27428 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 11:02:52.086657   27428 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0629 11:02:52.152842   27428 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0629 11:02:52.216455   27428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 11:02:52.285344   27428 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0629 11:02:52.487441   27428 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 11:02:52.523157   27428 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 11:02:52.598083   27428 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.17 ...
	I0629 11:02:52.598169   27428 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-20220629110235-24356 dig +short host.docker.internal
	I0629 11:02:52.722670   27428 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0629 11:02:52.722776   27428 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0629 11:02:52.727090   27428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 11:02:52.736573   27428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-20220629110235-24356
	I0629 11:02:52.805750   27428 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0629 11:02:52.805839   27428 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 11:02:52.834936   27428 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0629 11:02:52.834951   27428 docker.go:533] Images already preloaded, skipping extraction
	I0629 11:02:52.835035   27428 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 11:02:52.864651   27428 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0629 11:02:52.864677   27428 cache_images.go:84] Images are preloaded, skipping loading
	I0629 11:02:52.864739   27428 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0629 11:02:52.938014   27428 cni.go:95] Creating CNI manager for ""
	I0629 11:02:52.938027   27428 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:02:52.938045   27428 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0629 11:02:52.938085   27428 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-20220629110235-24356 NodeName:ingress-addon-legacy-20220629110235-24356 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 11:02:52.938197   27428 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-20220629110235-24356"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0629 11:02:52.938276   27428 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-20220629110235-24356 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220629110235-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0629 11:02:52.938321   27428 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0629 11:02:52.947648   27428 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 11:02:52.947702   27428 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0629 11:02:52.954670   27428 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0629 11:02:52.967330   27428 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0629 11:02:52.980072   27428 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2084 bytes)
	I0629 11:02:52.992824   27428 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0629 11:02:52.996533   27428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 11:02:53.005760   27428 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356 for IP: 192.168.49.2
	I0629 11:02:53.005882   27428 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key
	I0629 11:02:53.005940   27428 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key
	I0629 11:02:53.005979   27428 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/client.key
	I0629 11:02:53.005993   27428 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/client.crt with IP's: []
	I0629 11:02:53.364897   27428 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/client.crt ...
	I0629 11:02:53.364912   27428 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/client.crt: {Name:mk23eb5dbb8f2f294eceacb5778e3cb4fb9e0241 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:02:53.365221   27428 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/client.key ...
	I0629 11:02:53.365236   27428 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/client.key: {Name:mk3ba4bc5fd0d2fa4a3ba7f391904b4d09dddc0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:02:53.365448   27428 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/apiserver.key.dd3b5fb2
	I0629 11:02:53.365464   27428 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0629 11:02:53.638209   27428 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/apiserver.crt.dd3b5fb2 ...
	I0629 11:02:53.638222   27428 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/apiserver.crt.dd3b5fb2: {Name:mk118212db610b720cd7a7966056d1c441fe1d06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:02:53.638528   27428 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/apiserver.key.dd3b5fb2 ...
	I0629 11:02:53.638537   27428 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/apiserver.key.dd3b5fb2: {Name:mk6e970ae93e65fc5cb523e634c303eab8f96791 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:02:53.638753   27428 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/apiserver.crt
	I0629 11:02:53.638931   27428 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/apiserver.key
	I0629 11:02:53.639108   27428 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/proxy-client.key
	I0629 11:02:53.639124   27428 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/proxy-client.crt with IP's: []
	I0629 11:02:53.752305   27428 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/proxy-client.crt ...
	I0629 11:02:53.752314   27428 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/proxy-client.crt: {Name:mk3f3e537738ed69997accf934e216ce77375cae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:02:53.752619   27428 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/proxy-client.key ...
	I0629 11:02:53.752628   27428 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/proxy-client.key: {Name:mkb8b96b487011c954154b226393f5ed2a0fe010 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:02:53.752920   27428 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0629 11:02:53.752948   27428 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0629 11:02:53.752969   27428 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0629 11:02:53.752993   27428 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0629 11:02:53.753011   27428 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0629 11:02:53.753028   27428 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0629 11:02:53.753042   27428 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0629 11:02:53.753061   27428 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0629 11:02:53.753169   27428 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem (1338 bytes)
	W0629 11:02:53.753219   27428 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356_empty.pem, impossibly tiny 0 bytes
	I0629 11:02:53.753229   27428 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem (1679 bytes)
	I0629 11:02:53.753262   27428 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem (1082 bytes)
	I0629 11:02:53.753291   27428 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem (1123 bytes)
	I0629 11:02:53.753319   27428 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem (1675 bytes)
	I0629 11:02:53.753381   27428 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem (1708 bytes)
	I0629 11:02:53.753417   27428 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:02:53.753434   27428 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem -> /usr/share/ca-certificates/24356.pem
	I0629 11:02:53.753449   27428 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem -> /usr/share/ca-certificates/243562.pem
	I0629 11:02:53.753919   27428 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0629 11:02:53.771412   27428 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0629 11:02:53.787506   27428 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0629 11:02:53.803582   27428 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/ingress-addon-legacy-20220629110235-24356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0629 11:02:53.819786   27428 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 11:02:53.836136   27428 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0629 11:02:53.852493   27428 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 11:02:53.868896   27428 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0629 11:02:53.885485   27428 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 11:02:53.901930   27428 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem --> /usr/share/ca-certificates/24356.pem (1338 bytes)
	I0629 11:02:53.918903   27428 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /usr/share/ca-certificates/243562.pem (1708 bytes)
	I0629 11:02:53.935536   27428 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0629 11:02:53.948061   27428 ssh_runner.go:195] Run: openssl version
	I0629 11:02:53.953169   27428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 11:02:53.960706   27428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:02:53.964318   27428 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 17:54 /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:02:53.964363   27428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:02:53.969771   27428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 11:02:53.977423   27428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24356.pem && ln -fs /usr/share/ca-certificates/24356.pem /etc/ssl/certs/24356.pem"
	I0629 11:02:53.984841   27428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24356.pem
	I0629 11:02:53.988755   27428 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 17:58 /usr/share/ca-certificates/24356.pem
	I0629 11:02:53.988794   27428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24356.pem
	I0629 11:02:53.994242   27428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24356.pem /etc/ssl/certs/51391683.0"
	I0629 11:02:54.002026   27428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/243562.pem && ln -fs /usr/share/ca-certificates/243562.pem /etc/ssl/certs/243562.pem"
	I0629 11:02:54.009663   27428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/243562.pem
	I0629 11:02:54.013488   27428 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 17:58 /usr/share/ca-certificates/243562.pem
	I0629 11:02:54.013534   27428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/243562.pem
	I0629 11:02:54.018704   27428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/243562.pem /etc/ssl/certs/3ec20f2e.0"
	I0629 11:02:54.026167   27428 kubeadm.go:395] StartCluster: {Name:ingress-addon-legacy-20220629110235-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220629110235-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:02:54.026259   27428 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 11:02:54.053793   27428 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0629 11:02:54.061238   27428 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 11:02:54.068628   27428 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 11:02:54.068673   27428 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 11:02:54.075924   27428 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 11:02:54.075947   27428 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 11:02:54.790856   27428 out.go:204]   - Generating certificates and keys ...
	I0629 11:02:57.328987   27428 out.go:204]   - Booting up control plane ...
	W0629 11:04:52.244728   27428 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-20220629110235-24356 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-20220629110235-24356 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0629 18:02:54.138711     952 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0629 18:02:57.312547     952 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0629 18:02:57.313780     952 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-20220629110235-24356 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-20220629110235-24356 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0629 18:02:54.138711     952 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0629 18:02:57.312547     952 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0629 18:02:57.313780     952 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0629 11:04:52.244764   27428 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0629 11:04:52.667979   27428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 11:04:52.677484   27428 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 11:04:52.677556   27428 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 11:04:52.685058   27428 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 11:04:52.685079   27428 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 11:04:53.381042   27428 out.go:204]   - Generating certificates and keys ...
	I0629 11:04:54.124844   27428 out.go:204]   - Booting up control plane ...
	I0629 11:06:49.042151   27428 kubeadm.go:397] StartCluster complete in 3m55.014764702s
	I0629 11:06:49.042227   27428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:06:49.070507   27428 logs.go:274] 0 containers: []
	W0629 11:06:49.070518   27428 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:06:49.070576   27428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:06:49.100664   27428 logs.go:274] 0 containers: []
	W0629 11:06:49.100676   27428 logs.go:276] No container was found matching "etcd"
	I0629 11:06:49.100738   27428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:06:49.129339   27428 logs.go:274] 0 containers: []
	W0629 11:06:49.129352   27428 logs.go:276] No container was found matching "coredns"
	I0629 11:06:49.129410   27428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:06:49.157217   27428 logs.go:274] 0 containers: []
	W0629 11:06:49.157228   27428 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:06:49.157288   27428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:06:49.185536   27428 logs.go:274] 0 containers: []
	W0629 11:06:49.185555   27428 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:06:49.185612   27428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:06:49.212186   27428 logs.go:274] 0 containers: []
	W0629 11:06:49.212199   27428 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:06:49.212267   27428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:06:49.241488   27428 logs.go:274] 0 containers: []
	W0629 11:06:49.241503   27428 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:06:49.241560   27428 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:06:49.270147   27428 logs.go:274] 0 containers: []
	W0629 11:06:49.270160   27428 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:06:49.270166   27428 logs.go:123] Gathering logs for kubelet ...
	I0629 11:06:49.270172   27428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:06:49.310607   27428 logs.go:123] Gathering logs for dmesg ...
	I0629 11:06:49.310620   27428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:06:49.322213   27428 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:06:49.322227   27428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:06:49.373357   27428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:06:49.373373   27428 logs.go:123] Gathering logs for Docker ...
	I0629 11:06:49.373380   27428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:06:49.388862   27428 logs.go:123] Gathering logs for container status ...
	I0629 11:06:49.388875   27428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:06:51.443429   27428 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054531471s)
	W0629 11:06:51.443547   27428 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0629 18:04:52.746961    3453 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0629 18:04:54.126392    3453 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0629 18:04:54.127953    3453 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0629 11:06:51.443563   27428 out.go:239] * 
	* 
	W0629 11:06:51.443739   27428 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0629 18:04:52.746961    3453 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0629 18:04:54.126392    3453 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0629 18:04:54.127953    3453 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0629 18:04:52.746961    3453 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0629 18:04:54.126392    3453 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0629 18:04:54.127953    3453 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0629 11:06:51.443761   27428 out.go:239] * 
	* 
	W0629 11:06:51.444315   27428 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0629 11:06:51.525008   27428 out.go:177] 
	W0629 11:06:51.567209   27428 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0629 18:04:52.746961    3453 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0629 18:04:54.126392    3453 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0629 18:04:54.127953    3453 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0629 18:04:52.746961    3453 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0629 18:04:54.126392    3453 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0629 18:04:54.127953    3453 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0629 11:06:51.567380   27428 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0629 11:06:51.567452   27428 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0629 11:06:51.610081   27428 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220629110235-24356 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (255.91s)
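The failure output above carries minikube's own remediation hint (`journalctl -xeu kubelet`, `--extra-config=kubelet.cgroup-driver=systemd`). A sketch of acting on that hint for this profile — profile name, version, and flags are taken verbatim from the failed invocation in the log; whether the cgroup-driver override actually resolves this CI failure is not verified:

```shell
# Inspect kubelet logs inside the node container first, as the log suggests
# (assumes journalctl is available in the minikube node image):
minikube -p ingress-addon-legacy-20220629110235-24356 ssh -- \
  sudo journalctl -xeu kubelet --no-pager | tail -n 50

# Then retry the same start with the cgroup driver pinned to systemd,
# per the suggestion printed by minikube:
minikube start -p ingress-addon-legacy-20220629110235-24356 \
  --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker \
  --extra-config=kubelet.cgroup-driver=systemd
```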

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.61s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220629110235-24356 addons enable ingress --alsologtostderr -v=5
E0629 11:07:29.619744   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220629110235-24356 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m29.111832387s)

-- stdout --
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

-- /stdout --
** stderr ** 
	I0629 11:06:51.758139   27903 out.go:296] Setting OutFile to fd 1 ...
	I0629 11:06:51.758312   27903 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:06:51.758317   27903 out.go:309] Setting ErrFile to fd 2...
	I0629 11:06:51.758321   27903 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:06:51.758595   27903 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 11:06:51.759052   27903 config.go:178] Loaded profile config "ingress-addon-legacy-20220629110235-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0629 11:06:51.759066   27903 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-20220629110235-24356"
	I0629 11:06:51.759073   27903 addons.go:153] Setting addon ingress=true in "ingress-addon-legacy-20220629110235-24356"
	I0629 11:06:51.759351   27903 host.go:66] Checking if "ingress-addon-legacy-20220629110235-24356" exists ...
	I0629 11:06:51.759837   27903 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220629110235-24356 --format={{.State.Status}}
	I0629 11:06:51.848817   27903 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0629 11:06:51.871050   27903 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I0629 11:06:51.892871   27903 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0629 11:06:51.914699   27903 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0629 11:06:51.936629   27903 addons.go:345] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0629 11:06:51.936651   27903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15118 bytes)
	I0629 11:06:51.936719   27903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220629110235-24356
	I0629 11:06:52.004686   27903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50541 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/ingress-addon-legacy-20220629110235-24356/id_rsa Username:docker}
	I0629 11:06:52.102617   27903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0629 11:06:52.151565   27903 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:06:52.151588   27903 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:06:52.428193   27903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0629 11:06:52.481632   27903 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:06:52.481653   27903 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:06:53.023076   27903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0629 11:06:53.074501   27903 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:06:53.074515   27903 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:06:53.731911   27903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0629 11:06:53.785097   27903 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:06:53.785112   27903 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:06:54.578647   27903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0629 11:06:54.632726   27903 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:06:54.632742   27903 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:06:55.803326   27903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0629 11:06:55.854828   27903 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:06:55.854848   27903 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:06:58.108570   27903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0629 11:06:58.180211   27903 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:06:58.180232   27903 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:06:59.793267   27903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0629 11:06:59.846466   27903 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:06:59.846479   27903 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:07:02.653138   27903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0629 11:07:02.708376   27903 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:07:02.708395   27903 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:07:06.535661   27903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0629 11:07:06.588981   27903 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:07:06.588994   27903 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:07:14.287796   27903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0629 11:07:14.339789   27903 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:07:14.339803   27903 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:07:28.975677   27903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0629 11:07:29.027695   27903 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:07:29.027708   27903 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:07:57.434706   27903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0629 11:07:57.485681   27903 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:07:57.485695   27903 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:08:20.654259   27903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0629 11:08:20.705685   27903 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:08:20.705707   27903 addons.go:383] Verifying addon ingress=true in "ingress-addon-legacy-20220629110235-24356"
	I0629 11:08:20.727548   27903 out.go:177] * Verifying ingress addon...
	I0629 11:08:20.750621   27903 out.go:177] 
	W0629 11:08:20.772251   27903 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20220629110235-24356" does not exist: client config: context "ingress-addon-legacy-20220629110235-24356" does not exist]
	W0629 11:08:20.772286   27903 out.go:239] * 
	W0629 11:08:20.776013   27903 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0629 11:08:20.797392   27903 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220629110235-24356
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220629110235-24356:

-- stdout --
	[
	    {
	        "Id": "63ecd5642f58efe047eba67ac3aab70d8062f35b904463673d0b50979df215da",
	        "Created": "2022-06-29T18:02:48.070690482Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 36971,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T18:02:48.367010963Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/63ecd5642f58efe047eba67ac3aab70d8062f35b904463673d0b50979df215da/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/63ecd5642f58efe047eba67ac3aab70d8062f35b904463673d0b50979df215da/hostname",
	        "HostsPath": "/var/lib/docker/containers/63ecd5642f58efe047eba67ac3aab70d8062f35b904463673d0b50979df215da/hosts",
	        "LogPath": "/var/lib/docker/containers/63ecd5642f58efe047eba67ac3aab70d8062f35b904463673d0b50979df215da/63ecd5642f58efe047eba67ac3aab70d8062f35b904463673d0b50979df215da-json.log",
	        "Name": "/ingress-addon-legacy-20220629110235-24356",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-20220629110235-24356:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220629110235-24356",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/96d75ce0aeab89b91282627809c02dcf4c94171c07357cbcbc3864b1bd640ee6-init/diff:/var/lib/docker/overlay2/fffebe0fdfada5807aeb835ff23043496ab70477725ee4f168b630301ac03e45/diff:/var/lib/docker/overlay2/d4eb6d2f34aa8e5c143d900dccdec5da9e3d130567442e6745d4efac5202fe49/diff:/var/lib/docker/overlay2/eb35fadba12ed9c48500d69b77e98e7dd72e90d3de5197d58b370df5b5dca4c7/diff:/var/lib/docker/overlay2/7b63894f671ef1edaa7c3b80a2acbde52dcdb21970e320799b6884e79553ea3e/diff:/var/lib/docker/overlay2/3740b6bc6ff226137eb09a6350d4395dc04bd9012c6c66125dc2ea6b663082cd/diff:/var/lib/docker/overlay2/a2fda66ed4937725e85838baed61cac418abe2ba55b4e664bf944246efcdd371/diff:/var/lib/docker/overlay2/574408913c5c73ee699b85768bbb4c0ce70e697bf6eb623e32017c62e8413acd/diff:/var/lib/docker/overlay2/1cde03c3877bfb18ad0533f814863e3030abec268ff30faceab8815ea7e2daf2/diff:/var/lib/docker/overlay2/52bf889e64b2ea0160f303622d5febb9c52b864e5a6dc2bfa5db90933ccaaa29/diff:/var/lib/docker/overlay2/b131e6ae4a7a7f5705d087e4001676276e4daa26d6acfc99799bb4992e322410/diff:/var/lib/docker/overlay2/3f5c774f6f46936a974bfc6530b012fda75a59b22450e3342486fe400ab4b531/diff:/var/lib/docker/overlay2/8462528084f0c44a79e421427e0e4bc9ddd7642428c47ff1899d41b265223245/diff:/var/lib/docker/overlay2/cb9765866d13ba37669ec242ea0a1af87c92c7291c716e52037a2ccadc64ac82/diff:/var/lib/docker/overlay2/f0d06e6fa53f3ca9622f1efcfac6fe3fd18d2e5b9e07be3d624b0b9987073e55/diff:/var/lib/docker/overlay2/4ebd12d8b25cff2d3d8a989c047b696088121f0964cc7f94c6d0178ef16e3e1f/diff:/var/lib/docker/overlay2/40e16f5720fd3a8c1c8792aea0ec143af819f19cad845dde40b57ed7e372ab73/diff:/var/lib/docker/overlay2/3ce5ee64ba683c997a13b7ffa65978b4c9652772729737facd794209d49251c3/diff:/var/lib/docker/overlay2/c55c549a78d490ea576942661ba65103ea2992693548217973bb8fa1a5948b74/diff:/var/lib/docker/overlay2/4651b16dbc2e22b8a43dc1154546514f2076168d12f9c108f85fe7c6e60325f0/diff:/var/lib/docker/overlay2/9576343ea03501b15b520a83ffdc675c6d9ecd501f6ffcf6564dd75aa4f2812a/diff:/var/lib/docker/overlay2/635ba7d01f96fd1ec1acabf157f4e5c00cbf80adf65b7f8873e444745fef2c9b/diff:/var/lib/docker/overlay2/6bbe0ce6ca00a7eb5bd7c22def5fcab4ebecab4a0b4cbc5ed236429671a41b6c/diff:/var/lib/docker/overlay2/b335551ba0fcfd6bff6ef5627289041f3083dc338e67b4f4728d4937bb6fb33a/diff:/var/lib/docker/overlay2/58cd90f6ad9016f3c4befb63eac504c9d2f0fc66251c5c9e3348080785d3cec4/diff:/var/lib/docker/overlay2/b7d943a8463e032d405d531846436b89574f10efeea6e4f2df92e3bb0e169d8e/diff:/var/lib/docker/overlay2/e633899f71c18e322af1b75837392bc89fd4275534b5bc70037965b0b80a770d/diff:/var/lib/docker/overlay2/651aabda39b5851bd186e23bc84f1029d819ed8eb032b13ac12f50f3d1486bfb/diff:/var/lib/docker/overlay2/3b137e27694d242a419b3fd2f8605837edfe77dae9462c63c3d7b41538e82591/diff:/var/lib/docker/overlay2/e9d4369b871c47acb146b73f8cbe14b89b0f74027df9117a7dc73f5dee8fee1c/diff:/var/lib/docker/overlay2/9379269362a969b07cc7d7f9faff9fa3b745529df38758733014a5dbe2470775/diff:/var/lib/docker/overlay2/9231c154723fa536d9894f703ec0388448e8611d5a01d54bca3a5b0a0b17ffd2/diff:/var/lib/docker/overlay2/9610e37ded5c6da7bd2c8edc56c3ae864637bb354f8ea3d6d1ccee6bd5c2aa7f/diff:/var/lib/docker/overlay2/025ecca5e756b1b8177204df7b2f2567a76dda456b2f1a8e312efd63150a8943/diff:/var/lib/docker/overlay2/7e69089e438e096c36ea0a4a37280fd036841e3287e57635e3407eb58fc0b6da/diff:/var/lib/docker/overlay2/c6d9ef67ed33e64c8ac8c4cdc7c33eb68f5266987969676165cabc2cf2fd346b/diff:/var/lib/docker/overlay2/394627c68237f7993b91eb0c377001630bb2e709dd58f65d899d44a3586dae91/diff:/var/lib/docker/overlay2/0c0c3c94789fc85cd70d9ee2b56d67ce6471d4dced47f21f15152d4edb6bc3e5/diff:/var/lib/docker/overlay2/849809e48c9bcbfe092aa063fcd274f284eeacde89acbb602b439d4cf0aef9b6/diff:/var/lib/docker/overlay2/49c27f0a55f204b161aa2da33ba8004f46cb93bf673975ad1b6286ce659db632/diff:/var/lib/docker/overlay2/a712a8f5cdb2f3840c706296240407405826d2936df034393c1ddf3cf2480b5f/diff:/var/lib/docker/overlay2/47949bfd134ff7a50def5e9b3af3424faf216354d1f157552f3c63c67c2728ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/96d75ce0aeab89b91282627809c02dcf4c94171c07357cbcbc3864b1bd640ee6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/96d75ce0aeab89b91282627809c02dcf4c94171c07357cbcbc3864b1bd640ee6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/96d75ce0aeab89b91282627809c02dcf4c94171c07357cbcbc3864b1bd640ee6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220629110235-24356",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220629110235-24356/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220629110235-24356",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220629110235-24356",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220629110235-24356",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2395a995516436f166797cbe4f5701243966803dbbe6dbf8fb230755f9ab3ddf",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50541"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50537"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50538"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50539"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50540"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2395a9955164",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220629110235-24356": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "63ecd5642f58",
	                        "ingress-addon-legacy-20220629110235-24356"
	                    ],
	                    "NetworkID": "583593cea3528b935721632920029846e1d59639f02246644b3e33eeea4ea195",
	                    "EndpointID": "8ec5c3afafbe38af100dd6e8c111413da19b9ca9dcf526da8b4fbf75a07117d8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220629110235-24356 -n ingress-addon-legacy-20220629110235-24356
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220629110235-24356 -n ingress-addon-legacy-20220629110235-24356: exit status 6 (426.439614ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0629 11:08:21.308491   28001 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220629110235-24356" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220629110235-24356" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.61s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.51s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220629110235-24356 addons enable ingress-dns --alsologtostderr -v=5
E0629 11:08:51.540893   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220629110235-24356 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m28.998160984s)

-- stdout --
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

-- /stdout --
** stderr ** 
	I0629 11:08:21.366496   28011 out.go:296] Setting OutFile to fd 1 ...
	I0629 11:08:21.366671   28011 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:08:21.366676   28011 out.go:309] Setting ErrFile to fd 2...
	I0629 11:08:21.366680   28011 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:08:21.366930   28011 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 11:08:21.367389   28011 config.go:178] Loaded profile config "ingress-addon-legacy-20220629110235-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0629 11:08:21.367401   28011 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-20220629110235-24356"
	I0629 11:08:21.367408   28011 addons.go:153] Setting addon ingress-dns=true in "ingress-addon-legacy-20220629110235-24356"
	I0629 11:08:21.367635   28011 host.go:66] Checking if "ingress-addon-legacy-20220629110235-24356" exists ...
	I0629 11:08:21.368136   28011 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220629110235-24356 --format={{.State.Status}}
	I0629 11:08:21.456238   28011 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0629 11:08:21.478686   28011 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0629 11:08:21.500344   28011 addons.go:345] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0629 11:08:21.500385   28011 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0629 11:08:21.500515   28011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220629110235-24356
	I0629 11:08:21.568700   28011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50541 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/ingress-addon-legacy-20220629110235-24356/id_rsa Username:docker}
	I0629 11:08:21.658498   28011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0629 11:08:21.707527   28011 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:08:21.707548   28011 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:08:21.984222   28011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0629 11:08:22.035875   28011 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:08:22.035891   28011 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:08:22.578505   28011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0629 11:08:22.630904   28011 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:08:22.630925   28011 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:08:23.286754   28011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0629 11:08:23.336262   28011 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:08:23.336279   28011 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:08:24.128350   28011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0629 11:08:24.179206   28011 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:08:24.179221   28011 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:08:25.349688   28011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0629 11:08:25.401532   28011 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:08:25.401547   28011 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:08:27.654999   28011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0629 11:08:27.705254   28011 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:08:27.705269   28011 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:08:29.317670   28011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0629 11:08:29.368225   28011 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:08:29.368239   28011 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:08:32.173580   28011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0629 11:08:32.225626   28011 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:08:32.225640   28011 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:08:36.052939   28011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0629 11:08:36.105270   28011 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:08:36.105287   28011 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:08:43.805027   28011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0629 11:08:43.856346   28011 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:08:43.856362   28011 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:08:58.494267   28011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0629 11:08:58.544815   28011 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:08:58.544829   28011 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:09:26.953271   28011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0629 11:09:27.003863   28011 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:09:27.003879   28011 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:09:50.173841   28011 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0629 11:09:50.225296   28011 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0629 11:09:50.247056   28011 out.go:177] 
	W0629 11:09:50.268329   28011 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0629 11:09:50.268357   28011 out.go:239] * 
	W0629 11:09:50.272491   28011 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0629 11:09:50.294115   28011 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220629110235-24356
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220629110235-24356:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "63ecd5642f58efe047eba67ac3aab70d8062f35b904463673d0b50979df215da",
	        "Created": "2022-06-29T18:02:48.070690482Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 36971,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T18:02:48.367010963Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/63ecd5642f58efe047eba67ac3aab70d8062f35b904463673d0b50979df215da/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/63ecd5642f58efe047eba67ac3aab70d8062f35b904463673d0b50979df215da/hostname",
	        "HostsPath": "/var/lib/docker/containers/63ecd5642f58efe047eba67ac3aab70d8062f35b904463673d0b50979df215da/hosts",
	        "LogPath": "/var/lib/docker/containers/63ecd5642f58efe047eba67ac3aab70d8062f35b904463673d0b50979df215da/63ecd5642f58efe047eba67ac3aab70d8062f35b904463673d0b50979df215da-json.log",
	        "Name": "/ingress-addon-legacy-20220629110235-24356",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-20220629110235-24356:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220629110235-24356",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/96d75ce0aeab89b91282627809c02dcf4c94171c07357cbcbc3864b1bd640ee6-init/diff:/var/lib/docker/overlay2/fffebe0fdfada5807aeb835ff23043496ab70477725ee4f168b630301ac03e45/diff:/var/lib/docker/overlay2/d4eb6d2f34aa8e5c143d900dccdec5da9e3d130567442e6745d4efac5202fe49/diff:/var/lib/docker/overlay2/eb35fadba12ed9c48500d69b77e98e7dd72e90d3de5197d58b370df5b5dca4c7/diff:/var/lib/docker/overlay2/7b63894f671ef1edaa7c3b80a2acbde52dcdb21970e320799b6884e79553ea3e/diff:/var/lib/docker/overlay2/3740b6bc6ff226137eb09a6350d4395dc04bd9012c6c66125dc2ea6b663082cd/diff:/var/lib/docker/overlay2/a2fda66ed4937725e85838baed61cac418abe2ba55b4e664bf944246efcdd371/diff:/var/lib/docker/overlay2/574408913c5c73ee699b85768bbb4c0ce70e697bf6eb623e32017c62e8413acd/diff:/var/lib/docker/overlay2/1cde03c3877bfb18ad0533f814863e3030abec268ff30faceab8815ea7e2daf2/diff:/var/lib/docker/overlay2/52bf889e64b2ea0160f303622d5febb9c52b864e5a6dc2bfa5db90933ccaaa29/diff:/var/lib/docker/overlay2/b131e6
ae4a7a7f5705d087e4001676276e4daa26d6acfc99799bb4992e322410/diff:/var/lib/docker/overlay2/3f5c774f6f46936a974bfc6530b012fda75a59b22450e3342486fe400ab4b531/diff:/var/lib/docker/overlay2/8462528084f0c44a79e421427e0e4bc9ddd7642428c47ff1899d41b265223245/diff:/var/lib/docker/overlay2/cb9765866d13ba37669ec242ea0a1af87c92c7291c716e52037a2ccadc64ac82/diff:/var/lib/docker/overlay2/f0d06e6fa53f3ca9622f1efcfac6fe3fd18d2e5b9e07be3d624b0b9987073e55/diff:/var/lib/docker/overlay2/4ebd12d8b25cff2d3d8a989c047b696088121f0964cc7f94c6d0178ef16e3e1f/diff:/var/lib/docker/overlay2/40e16f5720fd3a8c1c8792aea0ec143af819f19cad845dde40b57ed7e372ab73/diff:/var/lib/docker/overlay2/3ce5ee64ba683c997a13b7ffa65978b4c9652772729737facd794209d49251c3/diff:/var/lib/docker/overlay2/c55c549a78d490ea576942661ba65103ea2992693548217973bb8fa1a5948b74/diff:/var/lib/docker/overlay2/4651b16dbc2e22b8a43dc1154546514f2076168d12f9c108f85fe7c6e60325f0/diff:/var/lib/docker/overlay2/9576343ea03501b15b520a83ffdc675c6d9ecd501f6ffcf6564dd75aa4f2812a/diff:/var/lib/d
ocker/overlay2/635ba7d01f96fd1ec1acabf157f4e5c00cbf80adf65b7f8873e444745fef2c9b/diff:/var/lib/docker/overlay2/6bbe0ce6ca00a7eb5bd7c22def5fcab4ebecab4a0b4cbc5ed236429671a41b6c/diff:/var/lib/docker/overlay2/b335551ba0fcfd6bff6ef5627289041f3083dc338e67b4f4728d4937bb6fb33a/diff:/var/lib/docker/overlay2/58cd90f6ad9016f3c4befb63eac504c9d2f0fc66251c5c9e3348080785d3cec4/diff:/var/lib/docker/overlay2/b7d943a8463e032d405d531846436b89574f10efeea6e4f2df92e3bb0e169d8e/diff:/var/lib/docker/overlay2/e633899f71c18e322af1b75837392bc89fd4275534b5bc70037965b0b80a770d/diff:/var/lib/docker/overlay2/651aabda39b5851bd186e23bc84f1029d819ed8eb032b13ac12f50f3d1486bfb/diff:/var/lib/docker/overlay2/3b137e27694d242a419b3fd2f8605837edfe77dae9462c63c3d7b41538e82591/diff:/var/lib/docker/overlay2/e9d4369b871c47acb146b73f8cbe14b89b0f74027df9117a7dc73f5dee8fee1c/diff:/var/lib/docker/overlay2/9379269362a969b07cc7d7f9faff9fa3b745529df38758733014a5dbe2470775/diff:/var/lib/docker/overlay2/9231c154723fa536d9894f703ec0388448e8611d5a01d54bca3a5b0a0b1
7ffd2/diff:/var/lib/docker/overlay2/9610e37ded5c6da7bd2c8edc56c3ae864637bb354f8ea3d6d1ccee6bd5c2aa7f/diff:/var/lib/docker/overlay2/025ecca5e756b1b8177204df7b2f2567a76dda456b2f1a8e312efd63150a8943/diff:/var/lib/docker/overlay2/7e69089e438e096c36ea0a4a37280fd036841e3287e57635e3407eb58fc0b6da/diff:/var/lib/docker/overlay2/c6d9ef67ed33e64c8ac8c4cdc7c33eb68f5266987969676165cabc2cf2fd346b/diff:/var/lib/docker/overlay2/394627c68237f7993b91eb0c377001630bb2e709dd58f65d899d44a3586dae91/diff:/var/lib/docker/overlay2/0c0c3c94789fc85cd70d9ee2b56d67ce6471d4dced47f21f15152d4edb6bc3e5/diff:/var/lib/docker/overlay2/849809e48c9bcbfe092aa063fcd274f284eeacde89acbb602b439d4cf0aef9b6/diff:/var/lib/docker/overlay2/49c27f0a55f204b161aa2da33ba8004f46cb93bf673975ad1b6286ce659db632/diff:/var/lib/docker/overlay2/a712a8f5cdb2f3840c706296240407405826d2936df034393c1ddf3cf2480b5f/diff:/var/lib/docker/overlay2/47949bfd134ff7a50def5e9b3af3424faf216354d1f157552f3c63c67c2728ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/96d75ce0aeab89b91282627809c02dcf4c94171c07357cbcbc3864b1bd640ee6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/96d75ce0aeab89b91282627809c02dcf4c94171c07357cbcbc3864b1bd640ee6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/96d75ce0aeab89b91282627809c02dcf4c94171c07357cbcbc3864b1bd640ee6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220629110235-24356",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220629110235-24356/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220629110235-24356",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220629110235-24356",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220629110235-24356",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2395a995516436f166797cbe4f5701243966803dbbe6dbf8fb230755f9ab3ddf",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50541"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50537"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50538"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50539"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50540"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2395a9955164",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220629110235-24356": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "63ecd5642f58",
	                        "ingress-addon-legacy-20220629110235-24356"
	                    ],
	                    "NetworkID": "583593cea3528b935721632920029846e1d59639f02246644b3e33eeea4ea195",
	                    "EndpointID": "8ec5c3afafbe38af100dd6e8c111413da19b9ca9dcf526da8b4fbf75a07117d8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
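The inspect output above shows the apiserver container port `8443/tcp` published on host port `50540` under `NetworkSettings.Ports`, which is the mapping the test helpers rely on. A sketch of extracting that mapping from `docker inspect` JSON (the embedded JSON is a hand-trimmed excerpt of the output above, not a full inspect document):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// inspectJSON is a minimal slice of the `docker inspect` output above:
// just the published port bindings.
const inspectJSON = `[{"NetworkSettings":{"Ports":{
  "8443/tcp":[{"HostIp":"0.0.0.0","HostPort":"50540"}],
  "22/tcp":[{"HostIp":"0.0.0.0","HostPort":"50541"}]}}}]`

// container mirrors only the fields we need from the inspect document.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

// hostPort returns the first host port bound to the given container port.
func hostPort(raw []byte, containerPort string) (string, error) {
	var cs []container
	if err := json.Unmarshal(raw, &cs); err != nil {
		return "", err
	}
	if len(cs) == 0 || len(cs[0].NetworkSettings.Ports[containerPort]) == 0 {
		return "", fmt.Errorf("no binding for %s", containerPort)
	}
	return cs[0].NetworkSettings.Ports[containerPort][0].HostPort, nil
}

func main() {
	p, err := hostPort([]byte(inspectJSON), "8443/tcp")
	if err != nil {
		panic(err)
	}
	fmt.Println(p) // prints 50540, the published apiserver port
}
```

A mismatch between this published port and the `server:` entry in kubeconfig is one way the "stale kubectl context" warning further down can arise.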
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220629110235-24356 -n ingress-addon-legacy-20220629110235-24356
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220629110235-24356 -n ingress-addon-legacy-20220629110235-24356: exit status 6 (437.372869ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0629 11:09:50.814045   28114 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220629110235-24356" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220629110235-24356" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.51s)
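The `status.go:413` error above fires because the profile name `ingress-addon-legacy-20220629110235-24356` is absent from the kubeconfig file, so the endpoint IP cannot be extracted. A naive, self-contained sketch of that membership check (the helper name and the embedded kubeconfig are hypothetical; minikube's real code loads the file via client-go rather than scanning lines):

```go
package main

import (
	"fmt"
	"strings"
)

// kubeconfig is a trimmed, made-up example; note the only cluster entry
// is "minikube", matching the stale-context situation in the log above.
const kubeconfig = `
clusters:
- cluster:
    server: https://127.0.0.1:50540
  name: minikube
contexts:
- context:
    cluster: minikube
  name: minikube
`

// clusterInKubeconfig reports whether an entry with the given name
// exists, using a naive line scan rather than a real YAML parse.
func clusterInKubeconfig(cfg, name string) bool {
	for _, line := range strings.Split(cfg, "\n") {
		if strings.TrimSpace(line) == "name: "+name {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(clusterInKubeconfig(kubeconfig, "minikube"))
	fmt.Println(clusterInKubeconfig(kubeconfig, "ingress-addon-legacy-20220629110235-24356"))
}
```

When the check fails as it does here, `minikube update-context` (suggested in the warning above) rewrites the kubeconfig entry for the profile.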

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.5s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:158: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220629110235-24356
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220629110235-24356:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "63ecd5642f58efe047eba67ac3aab70d8062f35b904463673d0b50979df215da",
	        "Created": "2022-06-29T18:02:48.070690482Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 36971,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T18:02:48.367010963Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/63ecd5642f58efe047eba67ac3aab70d8062f35b904463673d0b50979df215da/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/63ecd5642f58efe047eba67ac3aab70d8062f35b904463673d0b50979df215da/hostname",
	        "HostsPath": "/var/lib/docker/containers/63ecd5642f58efe047eba67ac3aab70d8062f35b904463673d0b50979df215da/hosts",
	        "LogPath": "/var/lib/docker/containers/63ecd5642f58efe047eba67ac3aab70d8062f35b904463673d0b50979df215da/63ecd5642f58efe047eba67ac3aab70d8062f35b904463673d0b50979df215da-json.log",
	        "Name": "/ingress-addon-legacy-20220629110235-24356",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-20220629110235-24356:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220629110235-24356",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/96d75ce0aeab89b91282627809c02dcf4c94171c07357cbcbc3864b1bd640ee6-init/diff:/var/lib/docker/overlay2/fffebe0fdfada5807aeb835ff23043496ab70477725ee4f168b630301ac03e45/diff:/var/lib/docker/overlay2/d4eb6d2f34aa8e5c143d900dccdec5da9e3d130567442e6745d4efac5202fe49/diff:/var/lib/docker/overlay2/eb35fadba12ed9c48500d69b77e98e7dd72e90d3de5197d58b370df5b5dca4c7/diff:/var/lib/docker/overlay2/7b63894f671ef1edaa7c3b80a2acbde52dcdb21970e320799b6884e79553ea3e/diff:/var/lib/docker/overlay2/3740b6bc6ff226137eb09a6350d4395dc04bd9012c6c66125dc2ea6b663082cd/diff:/var/lib/docker/overlay2/a2fda66ed4937725e85838baed61cac418abe2ba55b4e664bf944246efcdd371/diff:/var/lib/docker/overlay2/574408913c5c73ee699b85768bbb4c0ce70e697bf6eb623e32017c62e8413acd/diff:/var/lib/docker/overlay2/1cde03c3877bfb18ad0533f814863e3030abec268ff30faceab8815ea7e2daf2/diff:/var/lib/docker/overlay2/52bf889e64b2ea0160f303622d5febb9c52b864e5a6dc2bfa5db90933ccaaa29/diff:/var/lib/docker/overlay2/b131e6
ae4a7a7f5705d087e4001676276e4daa26d6acfc99799bb4992e322410/diff:/var/lib/docker/overlay2/3f5c774f6f46936a974bfc6530b012fda75a59b22450e3342486fe400ab4b531/diff:/var/lib/docker/overlay2/8462528084f0c44a79e421427e0e4bc9ddd7642428c47ff1899d41b265223245/diff:/var/lib/docker/overlay2/cb9765866d13ba37669ec242ea0a1af87c92c7291c716e52037a2ccadc64ac82/diff:/var/lib/docker/overlay2/f0d06e6fa53f3ca9622f1efcfac6fe3fd18d2e5b9e07be3d624b0b9987073e55/diff:/var/lib/docker/overlay2/4ebd12d8b25cff2d3d8a989c047b696088121f0964cc7f94c6d0178ef16e3e1f/diff:/var/lib/docker/overlay2/40e16f5720fd3a8c1c8792aea0ec143af819f19cad845dde40b57ed7e372ab73/diff:/var/lib/docker/overlay2/3ce5ee64ba683c997a13b7ffa65978b4c9652772729737facd794209d49251c3/diff:/var/lib/docker/overlay2/c55c549a78d490ea576942661ba65103ea2992693548217973bb8fa1a5948b74/diff:/var/lib/docker/overlay2/4651b16dbc2e22b8a43dc1154546514f2076168d12f9c108f85fe7c6e60325f0/diff:/var/lib/docker/overlay2/9576343ea03501b15b520a83ffdc675c6d9ecd501f6ffcf6564dd75aa4f2812a/diff:/var/lib/d
ocker/overlay2/635ba7d01f96fd1ec1acabf157f4e5c00cbf80adf65b7f8873e444745fef2c9b/diff:/var/lib/docker/overlay2/6bbe0ce6ca00a7eb5bd7c22def5fcab4ebecab4a0b4cbc5ed236429671a41b6c/diff:/var/lib/docker/overlay2/b335551ba0fcfd6bff6ef5627289041f3083dc338e67b4f4728d4937bb6fb33a/diff:/var/lib/docker/overlay2/58cd90f6ad9016f3c4befb63eac504c9d2f0fc66251c5c9e3348080785d3cec4/diff:/var/lib/docker/overlay2/b7d943a8463e032d405d531846436b89574f10efeea6e4f2df92e3bb0e169d8e/diff:/var/lib/docker/overlay2/e633899f71c18e322af1b75837392bc89fd4275534b5bc70037965b0b80a770d/diff:/var/lib/docker/overlay2/651aabda39b5851bd186e23bc84f1029d819ed8eb032b13ac12f50f3d1486bfb/diff:/var/lib/docker/overlay2/3b137e27694d242a419b3fd2f8605837edfe77dae9462c63c3d7b41538e82591/diff:/var/lib/docker/overlay2/e9d4369b871c47acb146b73f8cbe14b89b0f74027df9117a7dc73f5dee8fee1c/diff:/var/lib/docker/overlay2/9379269362a969b07cc7d7f9faff9fa3b745529df38758733014a5dbe2470775/diff:/var/lib/docker/overlay2/9231c154723fa536d9894f703ec0388448e8611d5a01d54bca3a5b0a0b1
7ffd2/diff:/var/lib/docker/overlay2/9610e37ded5c6da7bd2c8edc56c3ae864637bb354f8ea3d6d1ccee6bd5c2aa7f/diff:/var/lib/docker/overlay2/025ecca5e756b1b8177204df7b2f2567a76dda456b2f1a8e312efd63150a8943/diff:/var/lib/docker/overlay2/7e69089e438e096c36ea0a4a37280fd036841e3287e57635e3407eb58fc0b6da/diff:/var/lib/docker/overlay2/c6d9ef67ed33e64c8ac8c4cdc7c33eb68f5266987969676165cabc2cf2fd346b/diff:/var/lib/docker/overlay2/394627c68237f7993b91eb0c377001630bb2e709dd58f65d899d44a3586dae91/diff:/var/lib/docker/overlay2/0c0c3c94789fc85cd70d9ee2b56d67ce6471d4dced47f21f15152d4edb6bc3e5/diff:/var/lib/docker/overlay2/849809e48c9bcbfe092aa063fcd274f284eeacde89acbb602b439d4cf0aef9b6/diff:/var/lib/docker/overlay2/49c27f0a55f204b161aa2da33ba8004f46cb93bf673975ad1b6286ce659db632/diff:/var/lib/docker/overlay2/a712a8f5cdb2f3840c706296240407405826d2936df034393c1ddf3cf2480b5f/diff:/var/lib/docker/overlay2/47949bfd134ff7a50def5e9b3af3424faf216354d1f157552f3c63c67c2728ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/96d75ce0aeab89b91282627809c02dcf4c94171c07357cbcbc3864b1bd640ee6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/96d75ce0aeab89b91282627809c02dcf4c94171c07357cbcbc3864b1bd640ee6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/96d75ce0aeab89b91282627809c02dcf4c94171c07357cbcbc3864b1bd640ee6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220629110235-24356",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220629110235-24356/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220629110235-24356",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220629110235-24356",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220629110235-24356",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2395a995516436f166797cbe4f5701243966803dbbe6dbf8fb230755f9ab3ddf",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50541"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50537"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50538"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50539"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50540"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2395a9955164",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220629110235-24356": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "63ecd5642f58",
	                        "ingress-addon-legacy-20220629110235-24356"
	                    ],
	                    "NetworkID": "583593cea3528b935721632920029846e1d59639f02246644b3e33eeea4ea195",
	                    "EndpointID": "8ec5c3afafbe38af100dd6e8c111413da19b9ca9dcf526da8b4fbf75a07117d8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220629110235-24356 -n ingress-addon-legacy-20220629110235-24356
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220629110235-24356 -n ingress-addon-legacy-20220629110235-24356: exit status 6 (425.512007ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0629 11:09:51.314420   28126 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220629110235-24356" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220629110235-24356" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.50s)

TestPreload (271.48s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20220629112211-24356 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0
E0629 11:22:30.759423   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
E0629 11:25:58.468778   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
E0629 11:26:07.702791   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
preload_test.go:48: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p test-preload-20220629112211-24356 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0: exit status 109 (4m28.402263811s)

-- stdout --
	* [test-preload-20220629112211-24356] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14420
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node test-preload-20220629112211-24356 in cluster test-preload-20220629112211-24356
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.17.0 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0629 11:22:11.502435   31982 out.go:296] Setting OutFile to fd 1 ...
	I0629 11:22:11.502638   31982 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:22:11.502644   31982 out.go:309] Setting ErrFile to fd 2...
	I0629 11:22:11.502648   31982 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:22:11.503007   31982 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 11:22:11.503324   31982 out.go:303] Setting JSON to false
	I0629 11:22:11.518119   31982 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":8499,"bootTime":1656518432,"procs":382,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0629 11:22:11.518227   31982 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 11:22:11.539847   31982 out.go:177] * [test-preload-20220629112211-24356] minikube v1.26.0 on Darwin 12.4
	I0629 11:22:11.582111   31982 notify.go:193] Checking for updates...
	I0629 11:22:11.603742   31982 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 11:22:11.624728   31982 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 11:22:11.645901   31982 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0629 11:22:11.671912   31982 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 11:22:11.693107   31982 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 11:22:11.715355   31982 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 11:22:11.783837   31982 docker.go:137] docker version: linux-20.10.16
	I0629 11:22:11.783974   31982 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:22:11.904872   31982 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:46 SystemTime:2022-06-29 18:22:11.851974998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:22:11.926876   31982 out.go:177] * Using the docker driver based on user configuration
	I0629 11:22:11.948469   31982 start.go:284] selected driver: docker
	I0629 11:22:11.948491   31982 start.go:808] validating driver "docker" against <nil>
	I0629 11:22:11.948517   31982 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 11:22:11.951822   31982 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:22:12.073143   31982 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:46 SystemTime:2022-06-29 18:22:12.020423683 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:22:12.073251   31982 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0629 11:22:12.073396   31982 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0629 11:22:12.095188   31982 out.go:177] * Using Docker Desktop driver with root privileges
	I0629 11:22:12.116956   31982 cni.go:95] Creating CNI manager for ""
	I0629 11:22:12.116989   31982 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:22:12.117004   31982 start_flags.go:310] config:
	{Name:test-preload-20220629112211-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220629112211-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:22:12.139089   31982 out.go:177] * Starting control plane node test-preload-20220629112211-24356 in cluster test-preload-20220629112211-24356
	I0629 11:22:12.181016   31982 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 11:22:12.203032   31982 out.go:177] * Pulling base image ...
	I0629 11:22:12.247098   31982 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0629 11:22:12.247143   31982 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 11:22:12.247422   31982 cache.go:107] acquiring lock: {Name:mkc37f8d0e96011347ac9c73f3e44a2eb3154087 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 11:22:12.247464   31982 cache.go:107] acquiring lock: {Name:mk29619308787775d5ea7451998b5ef119fa9307 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 11:22:12.247472   31982 cache.go:107] acquiring lock: {Name:mk6bed0dd3ba25ab3075af560fca43b34826bf54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 11:22:12.249142   31982 cache.go:107] acquiring lock: {Name:mk31c25cde2df9fc32bddcc9e0bf895ebad98607 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 11:22:12.249526   31982 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0629 11:22:12.249540   31982 cache.go:107] acquiring lock: {Name:mk1b9b71f9bb74d74e77b8dda4c5f093cfcacd6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 11:22:12.249580   31982 cache.go:107] acquiring lock: {Name:mkffba4c2e6a900cbe3272d7139bb5aa9cd234e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 11:22:12.249600   31982 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.17933ms
	I0629 11:22:12.249613   31982 cache.go:107] acquiring lock: {Name:mk16c74b1fc61b615ccd8cd4a1c26acb855287b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 11:22:12.249665   31982 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0629 11:22:12.249710   31982 cache.go:107] acquiring lock: {Name:mk3c5e9e281781e3cbb4925b5f02e00feb7150cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 11:22:12.250215   31982 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0629 11:22:12.250389   31982 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0629 11:22:12.250402   31982 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0629 11:22:12.250401   31982 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0629 11:22:12.250417   31982 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0629 11:22:12.250522   31982 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0629 11:22:12.250420   31982 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0629 11:22:12.250714   31982 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/config.json ...
	I0629 11:22:12.250778   31982 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/config.json: {Name:mk32a371a08e42c87a6e752ec01607c2c1767091 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:22:12.258547   31982 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error: No such image: k8s.gcr.io/kube-proxy:v1.17.0
	I0629 11:22:12.258924   31982 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error: No such image: k8s.gcr.io/coredns:1.6.5
	I0629 11:22:12.259860   31982 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error: No such image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0629 11:22:12.259907   31982 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error: No such image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0629 11:22:12.260072   31982 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0629 11:22:12.260968   31982 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error: No such image: k8s.gcr.io/etcd:3.4.3-0
	I0629 11:22:12.261050   31982 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error: No such image: k8s.gcr.io/pause:3.1
	I0629 11:22:12.317226   31982 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 11:22:12.317258   31982 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 11:22:12.317275   31982 cache.go:208] Successfully downloaded all kic artifacts
	I0629 11:22:12.317324   31982 start.go:352] acquiring machines lock for test-preload-20220629112211-24356: {Name:mk68992fcfeddf30cf89637d3b765ee397f9ee43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 11:22:12.317456   31982 start.go:356] acquired machines lock for "test-preload-20220629112211-24356" in 120.395µs
	I0629 11:22:12.317481   31982 start.go:91] Provisioning new machine with config: &{Name:test-preload-20220629112211-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220629112211-24356 Name
space:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 11:22:12.317564   31982 start.go:131] createHost starting for "" (driver="docker")
	I0629 11:22:12.339492   31982 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0629 11:22:12.339798   31982 start.go:165] libmachine.API.Create for "test-preload-20220629112211-24356" (driver="docker")
	I0629 11:22:12.339825   31982 client.go:168] LocalClient.Create starting
	I0629 11:22:12.339886   31982 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem
	I0629 11:22:12.339917   31982 main.go:134] libmachine: Decoding PEM data...
	I0629 11:22:12.339935   31982 main.go:134] libmachine: Parsing certificate...
	I0629 11:22:12.339989   31982 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem
	I0629 11:22:12.340013   31982 main.go:134] libmachine: Decoding PEM data...
	I0629 11:22:12.340045   31982 main.go:134] libmachine: Parsing certificate...
	I0629 11:22:12.340475   31982 cli_runner.go:164] Run: docker network inspect test-preload-20220629112211-24356 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0629 11:22:12.409316   31982 cli_runner.go:211] docker network inspect test-preload-20220629112211-24356 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0629 11:22:12.409810   31982 network_create.go:272] running [docker network inspect test-preload-20220629112211-24356] to gather additional debugging logs...
	I0629 11:22:12.409857   31982 cli_runner.go:164] Run: docker network inspect test-preload-20220629112211-24356
	W0629 11:22:12.473552   31982 cli_runner.go:211] docker network inspect test-preload-20220629112211-24356 returned with exit code 1
	I0629 11:22:12.473578   31982 network_create.go:275] error running [docker network inspect test-preload-20220629112211-24356]: docker network inspect test-preload-20220629112211-24356: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20220629112211-24356
	I0629 11:22:12.473604   31982 network_create.go:277] output of [docker network inspect test-preload-20220629112211-24356]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20220629112211-24356
	
	** /stderr **
	I0629 11:22:12.473667   31982 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0629 11:22:12.539248   31982 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc001120230] misses:0}
	I0629 11:22:12.539285   31982 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 11:22:12.539302   31982 network_create.go:115] attempt to create docker network test-preload-20220629112211-24356 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0629 11:22:12.539359   31982 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220629112211-24356 test-preload-20220629112211-24356
	W0629 11:22:12.602664   31982 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220629112211-24356 test-preload-20220629112211-24356 returned with exit code 1
	W0629 11:22:12.602711   31982 network_create.go:107] failed to create docker network test-preload-20220629112211-24356 192.168.49.0/24, will retry: subnet is taken
	I0629 11:22:12.602961   31982 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc001120230] amended:false}} dirty:map[] misses:0}
	I0629 11:22:12.602977   31982 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 11:22:12.603183   31982 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc001120230] amended:true}} dirty:map[192.168.49.0:0xc001120230 192.168.58.0:0xc001120278] misses:0}
	I0629 11:22:12.603196   31982 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 11:22:12.603202   31982 network_create.go:115] attempt to create docker network test-preload-20220629112211-24356 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0629 11:22:12.603254   31982 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220629112211-24356 test-preload-20220629112211-24356
	W0629 11:22:12.665808   31982 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220629112211-24356 test-preload-20220629112211-24356 returned with exit code 1
	W0629 11:22:12.665843   31982 network_create.go:107] failed to create docker network test-preload-20220629112211-24356 192.168.58.0/24, will retry: subnet is taken
	I0629 11:22:12.666095   31982 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc001120230] amended:true}} dirty:map[192.168.49.0:0xc001120230 192.168.58.0:0xc001120278] misses:1}
	I0629 11:22:12.666117   31982 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 11:22:12.666307   31982 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc001120230] amended:true}} dirty:map[192.168.49.0:0xc001120230 192.168.58.0:0xc001120278 192.168.67.0:0xc00071c550] misses:1}
	I0629 11:22:12.666320   31982 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 11:22:12.666328   31982 network_create.go:115] attempt to create docker network test-preload-20220629112211-24356 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0629 11:22:12.666385   31982 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220629112211-24356 test-preload-20220629112211-24356
	I0629 11:22:12.759854   31982 network_create.go:99] docker network test-preload-20220629112211-24356 192.168.67.0/24 created
	I0629 11:22:12.759889   31982 kic.go:106] calculated static IP "192.168.67.2" for the "test-preload-20220629112211-24356" container
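The retries above step the candidate subnet's third octet by 9 (49 → 58 → 67) until `docker network create` succeeds; taken subnets are skipped and reserved for 1m0s so concurrent creators don't collide. A minimal sketch of that candidate progression, assuming only what the log shows (the step value and loop shape are illustrative, not minikube's actual implementation):

```shell
#!/bin/sh
# Walk /24 candidates the way the log does: third octet 49 -> 58 -> 67 (step 9).
# In minikube, each candidate is handed to `docker network create --subnet=...`
# and the loop advances only when that command fails with "subnet is taken".
third=49
for attempt in 1 2 3; do
    echo "192.168.${third}.0/24"
    third=$((third + 9))
done
# prints:
# 192.168.49.0/24
# 192.168.58.0/24
# 192.168.67.0/24
```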
	I0629 11:22:12.759974   31982 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0629 11:22:12.822570   31982 cli_runner.go:164] Run: docker volume create test-preload-20220629112211-24356 --label name.minikube.sigs.k8s.io=test-preload-20220629112211-24356 --label created_by.minikube.sigs.k8s.io=true
	I0629 11:22:12.885359   31982 oci.go:103] Successfully created a docker volume test-preload-20220629112211-24356
	I0629 11:22:12.885464   31982 cli_runner.go:164] Run: docker run --rm --name test-preload-20220629112211-24356-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20220629112211-24356 --entrypoint /usr/bin/test -v test-preload-20220629112211-24356:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -d /var/lib
	I0629 11:22:13.327894   31982 oci.go:107] Successfully prepared a docker volume test-preload-20220629112211-24356
	I0629 11:22:13.327938   31982 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0629 11:22:13.328003   31982 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0629 11:22:13.451062   31982 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname test-preload-20220629112211-24356 --name test-preload-20220629112211-24356 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20220629112211-24356 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=test-preload-20220629112211-24356 --network test-preload-20220629112211-24356 --ip 192.168.67.2 --volume test-preload-20220629112211-24356:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e
	I0629 11:22:13.854964   31982 cli_runner.go:164] Run: docker container inspect test-preload-20220629112211-24356 --format={{.State.Running}}
	I0629 11:22:13.924442   31982 cli_runner.go:164] Run: docker container inspect test-preload-20220629112211-24356 --format={{.State.Status}}
	I0629 11:22:14.000932   31982 cli_runner.go:164] Run: docker exec test-preload-20220629112211-24356 stat /var/lib/dpkg/alternatives/iptables
	I0629 11:22:14.134901   31982 oci.go:144] the created container "test-preload-20220629112211-24356" has a running status.
	I0629 11:22:14.134924   31982 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/test-preload-20220629112211-24356/id_rsa...
	I0629 11:22:14.248399   31982 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/test-preload-20220629112211-24356/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0629 11:22:14.269568   31982 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0629 11:22:14.358508   31982 cli_runner.go:164] Run: docker container inspect test-preload-20220629112211-24356 --format={{.State.Status}}
	I0629 11:22:14.410897   31982 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0
	I0629 11:22:14.424297   31982 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0629 11:22:14.424309   31982 kic_runner.go:114] Args: [docker exec --privileged test-preload-20220629112211-24356 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0629 11:22:14.480085   31982 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
	I0629 11:22:14.480117   31982 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 2.230592258s
	I0629 11:22:14.480133   31982 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
	I0629 11:22:14.483465   31982 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0
	I0629 11:22:14.494892   31982 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0
	I0629 11:22:14.538862   31982 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5
	I0629 11:22:14.541098   31982 cli_runner.go:164] Run: docker container inspect test-preload-20220629112211-24356 --format={{.State.Status}}
	I0629 11:22:14.543283   31982 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0
	I0629 11:22:14.552454   31982 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0
	I0629 11:22:14.608478   31982 machine.go:88] provisioning docker machine ...
	I0629 11:22:14.608506   31982 ubuntu.go:169] provisioning hostname "test-preload-20220629112211-24356"
	I0629 11:22:14.608575   31982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220629112211-24356
	I0629 11:22:14.675120   31982 main.go:134] libmachine: Using SSH client type: native
	I0629 11:22:14.675310   31982 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 54090 <nil> <nil>}
	I0629 11:22:14.675323   31982 main.go:134] libmachine: About to run SSH command:
	sudo hostname test-preload-20220629112211-24356 && echo "test-preload-20220629112211-24356" | sudo tee /etc/hostname
	I0629 11:22:14.804582   31982 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-20220629112211-24356
	
	I0629 11:22:14.804663   31982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220629112211-24356
	I0629 11:22:14.872701   31982 main.go:134] libmachine: Using SSH client type: native
	I0629 11:22:14.872838   31982 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 54090 <nil> <nil>}
	I0629 11:22:14.872853   31982 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-20220629112211-24356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-20220629112211-24356/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-20220629112211-24356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 11:22:14.993276   31982 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 11:22:14.993294   31982 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube}
	I0629 11:22:14.993311   31982 ubuntu.go:177] setting up certificates
	I0629 11:22:14.993316   31982 provision.go:83] configureAuth start
	I0629 11:22:14.993375   31982 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220629112211-24356
	I0629 11:22:15.059635   31982 provision.go:138] copyHostCerts
	I0629 11:22:15.059790   31982 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem, removing ...
	I0629 11:22:15.059799   31982 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem
	I0629 11:22:15.059890   31982 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem (1082 bytes)
	I0629 11:22:15.060071   31982 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem, removing ...
	I0629 11:22:15.060093   31982 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem
	I0629 11:22:15.060177   31982 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem (1123 bytes)
	I0629 11:22:15.060348   31982 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem, removing ...
	I0629 11:22:15.060355   31982 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem
	I0629 11:22:15.060416   31982 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem (1675 bytes)
	I0629 11:22:15.060540   31982 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem org=jenkins.test-preload-20220629112211-24356 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-20220629112211-24356]
	I0629 11:22:15.113083   31982 provision.go:172] copyRemoteCerts
	I0629 11:22:15.113133   31982 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 11:22:15.113174   31982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220629112211-24356
	I0629 11:22:15.180598   31982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54090 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/test-preload-20220629112211-24356/id_rsa Username:docker}
	I0629 11:22:15.268296   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0629 11:22:15.285528   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0629 11:22:15.302110   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0629 11:22:15.318344   31982 provision.go:86] duration metric: configureAuth took 325.010881ms
	I0629 11:22:15.318357   31982 ubuntu.go:193] setting minikube options for container-runtime
	I0629 11:22:15.318486   31982 config.go:178] Loaded profile config "test-preload-20220629112211-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0629 11:22:15.318538   31982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220629112211-24356
	I0629 11:22:15.385941   31982 main.go:134] libmachine: Using SSH client type: native
	I0629 11:22:15.386154   31982 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 54090 <nil> <nil>}
	I0629 11:22:15.386169   31982 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0629 11:22:15.505995   31982 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0629 11:22:15.506017   31982 ubuntu.go:71] root file system type: overlay
	I0629 11:22:15.506140   31982 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0629 11:22:15.506212   31982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220629112211-24356
	I0629 11:22:15.573408   31982 main.go:134] libmachine: Using SSH client type: native
	I0629 11:22:15.573540   31982 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 54090 <nil> <nil>}
	I0629 11:22:15.573604   31982 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0629 11:22:15.700568   31982 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0629 11:22:15.700646   31982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220629112211-24356
	I0629 11:22:15.771255   31982 main.go:134] libmachine: Using SSH client type: native
	I0629 11:22:15.771401   31982 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 54090 <nil> <nil>}
	I0629 11:22:15.771415   31982 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
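The command above is an idempotent update: the new unit is written to `docker.service.new`, and the move/`daemon-reload`/`restart` sequence runs only when `diff` exits non-zero (i.e. the files differ). The same replace-on-diff pattern, demonstrated with plain temp files instead of systemd units (the file contents here are illustrative):

```shell
#!/bin/sh
# Replace target with candidate only when they differ, then act on the change.
target=$(mktemp) && candidate=$(mktemp)
printf 'ExecStart=old\n' > "$target"
printf 'ExecStart=new\n' > "$candidate"
if ! diff -u "$target" "$candidate" > /dev/null; then
    mv "$candidate" "$target"   # differs: install the new version
    echo "updated"              # here minikube runs daemon-reload + restart
else
    rm -f "$candidate"          # identical: nothing to do, no service restart
fi
grep 'ExecStart=' "$target"     # prints: ExecStart=new
rm -f "$target"
```

The `|| { ... }` form in the log is the same conditional with the happy path inverted: when the files already match, `diff` exits 0 and Docker is left untouched.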
	I0629 11:22:16.353768   31982 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-29 18:22:15.712166777 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0629 11:22:16.353794   31982 machine.go:91] provisioned docker machine in 1.745281792s
	I0629 11:22:16.353803   31982 client.go:171] LocalClient.Create took 4.013922972s
	I0629 11:22:16.353824   31982 start.go:173] duration metric: libmachine.API.Create for "test-preload-20220629112211-24356" took 4.013971431s
	I0629 11:22:16.353835   31982 start.go:306] post-start starting for "test-preload-20220629112211-24356" (driver="docker")
	I0629 11:22:16.353842   31982 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 11:22:16.353912   31982 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 11:22:16.353975   31982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220629112211-24356
	I0629 11:22:16.423113   31982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54090 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/test-preload-20220629112211-24356/id_rsa Username:docker}
	I0629 11:22:16.508907   31982 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 11:22:16.534517   31982 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 11:22:16.534531   31982 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 11:22:16.534537   31982 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 11:22:16.534545   31982 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 11:22:16.534555   31982 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/addons for local assets ...
	I0629 11:22:16.534665   31982 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files for local assets ...
	I0629 11:22:16.534804   31982 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem -> 243562.pem in /etc/ssl/certs
	I0629 11:22:16.534948   31982 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 11:22:16.542509   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /etc/ssl/certs/243562.pem (1708 bytes)
	I0629 11:22:16.559889   31982 start.go:309] post-start completed in 206.04378ms
	I0629 11:22:16.560429   31982 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220629112211-24356
	I0629 11:22:16.629631   31982 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/config.json ...
	I0629 11:22:16.630036   31982 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 11:22:16.630089   31982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220629112211-24356
	I0629 11:22:16.685183   31982 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 exists
	I0629 11:22:16.685210   31982 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5" took 4.435704223s
	I0629 11:22:16.685224   31982 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 succeeded
	I0629 11:22:16.699873   31982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54090 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/test-preload-20220629112211-24356/id_rsa Username:docker}
	I0629 11:22:16.782224   31982 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 11:22:16.787048   31982 start.go:134] duration metric: createHost completed in 4.469419867s
	I0629 11:22:16.787065   31982 start.go:81] releasing machines lock for "test-preload-20220629112211-24356", held for 4.469545844s
	I0629 11:22:16.787154   31982 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220629112211-24356
	I0629 11:22:16.856711   31982 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 11:22:16.856826   31982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220629112211-24356
	I0629 11:22:16.924041   31982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54090 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/test-preload-20220629112211-24356/id_rsa Username:docker}
	I0629 11:22:18.578792   31982 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 exists
	I0629 11:22:18.578814   31982 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0" took 6.329616175s
	I0629 11:22:18.578822   31982 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 succeeded
	I0629 11:22:18.719774   31982 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 exists
	I0629 11:22:18.719792   31982 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0" took 6.472305837s
	I0629 11:22:18.719807   31982 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 succeeded
	I0629 11:22:19.443393   31982 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 exists
	I0629 11:22:19.443408   31982 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0" took 7.195834855s
	I0629 11:22:19.443417   31982 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 succeeded
	I0629 11:22:21.562588   31982 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 exists
	I0629 11:22:21.562608   31982 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0" took 9.315061054s
	I0629 11:22:21.562616   31982 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 succeeded
	I0629 11:22:22.900298   31982 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 exists
	I0629 11:22:22.900315   31982 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0" took 10.65077582s
	I0629 11:22:22.900323   31982 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0629 11:22:22.900335   31982 cache.go:87] Successfully saved all images to host disk.
	I0629 11:22:22.900388   31982 ssh_runner.go:195] Run: systemctl --version
	I0629 11:22:22.905321   31982 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0629 11:22:22.914690   31982 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0629 11:22:22.914735   31982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 11:22:22.924321   31982 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
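The printf-piped-to-tee command above is how minikube writes `/etc/crictl.yaml` so that crictl talks to the dockershim socket. A minimal re-creation of the same file, written to a temp directory here instead of `/etc` and without sudo:

```shell
# Recreate minikube's crictl config in a scratch directory (not /etc).
tmp=$(mktemp -d)
printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | tee "$tmp/crictl.yaml" > /dev/null
# Both endpoints should point at the dockershim socket.
grep -c 'unix:///var/run/dockershim.sock' "$tmp/crictl.yaml"   # prints 2
```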
	I0629 11:22:22.937088   31982 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0629 11:22:23.006615   31982 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0629 11:22:23.072297   31982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 11:22:23.139265   31982 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0629 11:22:23.338575   31982 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 11:22:23.374942   31982 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 11:22:23.453662   31982 out.go:204] * Preparing Kubernetes v1.17.0 on Docker 20.10.17 ...
	I0629 11:22:23.453850   31982 cli_runner.go:164] Run: docker exec -t test-preload-20220629112211-24356 dig +short host.docker.internal
	I0629 11:22:23.574618   31982 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0629 11:22:23.574986   31982 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0629 11:22:23.579358   31982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 11:22:23.588664   31982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" test-preload-20220629112211-24356
	I0629 11:22:23.655883   31982 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0629 11:22:23.655947   31982 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 11:22:23.685168   31982 docker.go:602] Got preloaded images: 
	I0629 11:22:23.685180   31982 docker.go:608] k8s.gcr.io/kube-apiserver:v1.17.0 wasn't preloaded
	I0629 11:22:23.685186   31982 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.17.0 k8s.gcr.io/kube-controller-manager:v1.17.0 k8s.gcr.io/kube-scheduler:v1.17.0 k8s.gcr.io/kube-proxy:v1.17.0 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.5 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0629 11:22:23.692190   31982 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 11:22:23.692586   31982 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0629 11:22:23.693166   31982 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0629 11:22:23.693507   31982 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0629 11:22:23.694137   31982 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0629 11:22:23.694502   31982 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0629 11:22:23.695862   31982 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0629 11:22:23.696189   31982 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0629 11:22:23.699912   31982 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 11:22:23.700891   31982 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error: No such image: k8s.gcr.io/etcd:3.4.3-0
	I0629 11:22:23.702571   31982 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error: No such image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0629 11:22:23.702594   31982 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error: No such image: k8s.gcr.io/kube-proxy:v1.17.0
	I0629 11:22:23.702600   31982 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error: No such image: k8s.gcr.io/coredns:1.6.5
	I0629 11:22:23.702720   31982 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error: No such image: k8s.gcr.io/pause:3.1
	I0629 11:22:23.702871   31982 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0629 11:22:23.703120   31982 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error: No such image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0629 11:22:24.993407   31982 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
	I0629 11:22:24.995375   31982 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.17.0
	I0629 11:22:25.025778   31982 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.17.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.17.0" does not exist at hash "0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2" in container runtime
	I0629 11:22:25.025797   31982 cache_images.go:116] "k8s.gcr.io/etcd:3.4.3-0" needs transfer: "k8s.gcr.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0629 11:22:25.025810   31982 docker.go:283] Removing image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0629 11:22:25.025815   31982 docker.go:283] Removing image: k8s.gcr.io/etcd:3.4.3-0
	I0629 11:22:25.025866   31982 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/etcd:3.4.3-0
	I0629 11:22:25.025867   31982 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-apiserver:v1.17.0
	I0629 11:22:25.058426   31982 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0
	I0629 11:22:25.058443   31982 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0
	I0629 11:22:25.058553   31982 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0
	I0629 11:22:25.058554   31982 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0
	I0629 11:22:25.062775   31982 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.4.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.4.3-0': No such file or directory
	I0629 11:22:25.062795   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 --> /var/lib/minikube/images/etcd_3.4.3-0 (100950016 bytes)
	I0629 11:22:25.062920   31982 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.17.0': No such file or directory
	I0629 11:22:25.062937   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 --> /var/lib/minikube/images/kube-apiserver_v1.17.0 (50629632 bytes)
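The "existence check ... Process exited with status 1" lines above are minikube's cheap cache probe: ssh_runner stats the tarball on the guest and only falls back to scp when stat fails. The pattern, reduced to plain shell with a hypothetical path:

```shell
# stat exits non-zero for a missing file; that failure is the signal to scp it.
target="/tmp/definitely-missing-image-tarball.tar"
rm -f "$target"
if ! stat -c "%s %y" "$target" >/dev/null 2>&1; then
  echo "needs transfer"
else
  echo "cached"
fi
```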
	I0629 11:22:25.078993   31982 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.17.0
	I0629 11:22:25.125351   31982 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.1
	I0629 11:22:25.127175   31982 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.17.0
	I0629 11:22:25.129959   31982 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.5
	I0629 11:22:25.151098   31982 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.17.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.17.0" does not exist at hash "7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19" in container runtime
	I0629 11:22:25.151122   31982 docker.go:283] Removing image: k8s.gcr.io/kube-proxy:v1.17.0
	I0629 11:22:25.151185   31982 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-proxy:v1.17.0
	I0629 11:22:25.208799   31982 cache_images.go:116] "k8s.gcr.io/pause:3.1" needs transfer: "k8s.gcr.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0629 11:22:25.208822   31982 docker.go:283] Removing image: k8s.gcr.io/pause:3.1
	I0629 11:22:25.208872   31982 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/pause:3.1
	I0629 11:22:25.210704   31982 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.17.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.17.0" does not exist at hash "5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056" in container runtime
	I0629 11:22:25.210733   31982 docker.go:283] Removing image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0629 11:22:25.210779   31982 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-controller-manager:v1.17.0
	I0629 11:22:25.218887   31982 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.17.0
	I0629 11:22:25.239596   31982 cache_images.go:116] "k8s.gcr.io/coredns:1.6.5" needs transfer: "k8s.gcr.io/coredns:1.6.5" does not exist at hash "70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61" in container runtime
	I0629 11:22:25.239654   31982 docker.go:283] Removing image: k8s.gcr.io/coredns:1.6.5
	I0629 11:22:25.239756   31982 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/coredns:1.6.5
	I0629 11:22:25.251624   31982 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0
	I0629 11:22:25.251744   31982 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0
	I0629 11:22:25.303556   31982 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0629 11:22:25.303678   31982 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0629 11:22:25.322851   31982 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0
	I0629 11:22:25.322976   31982 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I0629 11:22:25.347977   31982 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.17.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.17.0" does not exist at hash "78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28" in container runtime
	I0629 11:22:25.348007   31982 docker.go:283] Removing image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0629 11:22:25.348084   31982 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-scheduler:v1.17.0
	I0629 11:22:25.352145   31982 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5
	I0629 11:22:25.352180   31982 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.17.0': No such file or directory
	I0629 11:22:25.352208   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 --> /var/lib/minikube/images/kube-proxy_v1.17.0 (48705536 bytes)
	I0629 11:22:25.352295   31982 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5
	I0629 11:22:25.366618   31982 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.1': No such file or directory
	I0629 11:22:25.366662   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 --> /var/lib/minikube/images/pause_3.1 (318976 bytes)
	I0629 11:22:25.383937   31982 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.17.0': No such file or directory
	I0629 11:22:25.383967   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 --> /var/lib/minikube/images/kube-controller-manager_v1.17.0 (48791552 bytes)
	I0629 11:22:25.445847   31982 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_1.6.5: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_1.6.5': No such file or directory
	I0629 11:22:25.445879   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 --> /var/lib/minikube/images/coredns_1.6.5 (13241856 bytes)
	I0629 11:22:25.446795   31982 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0
	I0629 11:22:25.446941   31982 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0
	I0629 11:22:25.475692   31982 docker.go:250] Loading image: /var/lib/minikube/images/pause_3.1
	I0629 11:22:25.475706   31982 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.1 | docker load"
	I0629 11:22:25.507761   31982 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.17.0': No such file or directory
	I0629 11:22:25.507796   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 --> /var/lib/minikube/images/kube-scheduler_v1.17.0 (33822208 bytes)
	I0629 11:22:25.560528   31982 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 11:22:25.737412   31982 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0629 11:22:25.737445   31982 docker.go:283] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 11:22:25.737514   31982 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 11:22:25.738719   31982 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 from cache
	I0629 11:22:25.828200   31982 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0629 11:22:25.828331   31982 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0629 11:22:25.893612   31982 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0629 11:22:25.893655   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0629 11:22:26.849100   31982 docker.go:250] Loading image: /var/lib/minikube/images/coredns_1.6.5
	I0629 11:22:26.849141   31982 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_1.6.5 | docker load"
	I0629 11:22:27.680707   31982 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 from cache
	I0629 11:22:27.680745   31982 docker.go:250] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0629 11:22:27.680768   31982 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0629 11:22:28.264625   31982 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0629 11:22:28.696409   31982 docker.go:250] Loading image: /var/lib/minikube/images/kube-scheduler_v1.17.0
	I0629 11:22:28.696435   31982 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.17.0 | docker load"
	I0629 11:22:30.526595   31982 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.17.0 | docker load": (1.830119728s)
	I0629 11:22:30.526610   31982 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 from cache
	I0629 11:22:30.526632   31982 docker.go:250] Loading image: /var/lib/minikube/images/kube-apiserver_v1.17.0
	I0629 11:22:30.526642   31982 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load"
	I0629 11:22:31.554416   31982 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load": (1.027743141s)
	I0629 11:22:31.554429   31982 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 from cache
	I0629 11:22:31.554453   31982 docker.go:250] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I0629 11:22:31.554472   31982 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.17.0 | docker load"
	I0629 11:22:32.662149   31982 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.17.0 | docker load": (1.107649328s)
	I0629 11:22:32.662162   31982 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 from cache
	I0629 11:22:32.662183   31982 docker.go:250] Loading image: /var/lib/minikube/images/kube-proxy_v1.17.0
	I0629 11:22:32.662190   31982 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.17.0 | docker load"
	I0629 11:22:33.610665   31982 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 from cache
	I0629 11:22:33.610696   31982 docker.go:250] Loading image: /var/lib/minikube/images/etcd_3.4.3-0
	I0629 11:22:33.610707   31982 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.4.3-0 | docker load"
	I0629 11:22:36.765020   31982 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.4.3-0 | docker load": (3.154249002s)
	I0629 11:22:36.765036   31982 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 from cache
	I0629 11:22:36.765064   31982 cache_images.go:123] Successfully loaded all cached images
	I0629 11:22:36.765073   31982 cache_images.go:92] LoadImages completed in 13.079712097s
	I0629 11:22:36.765159   31982 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0629 11:22:36.841306   31982 cni.go:95] Creating CNI manager for ""
	I0629 11:22:36.841318   31982 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:22:36.841330   31982 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0629 11:22:36.841342   31982 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.17.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-20220629112211-24356 NodeName:test-preload-20220629112211-24356 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 11:22:36.841444   31982 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "test-preload-20220629112211-24356"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.17.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
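As an aside on the config above: the `cgroupDriver: systemd` value in the KubeletConfiguration is filled in from the `docker info --format {{.CgroupDriver}}` probe seen earlier in this log; if it ever disagreed with what Docker actually uses, the kubelet would exit right after starting. A minimal sketch of pulling that value back out of the config, using a hand-copied fragment rather than a live node (all `/tmp` paths are illustrative):

```shell
# The fragment below is hand-copied from the generated kubeadm.yaml above,
# not read from a real minikube node.
cat > /tmp/kubelet-fragment.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
# Extract the configured cgroup driver for comparison against
# `docker info --format {{.CgroupDriver}}` on the node.
sed -n 's/^cgroupDriver: //p' /tmp/kubelet-fragment.yaml
```

On a live node the same `sed` invocation could be pointed at `/var/tmp/minikube/kubeadm.yaml`, where the log shows the rendered config being copied.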
	I0629 11:22:36.841509   31982 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.17.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=test-preload-20220629112211-24356 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220629112211-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0629 11:22:36.841569   31982 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.17.0
	I0629 11:22:36.850565   31982 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.17.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.17.0': No such file or directory
	
	Initiating transfer...
	I0629 11:22:36.850610   31982 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.17.0
	I0629 11:22:36.858116   31982 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/linux/amd64/v1.17.0/kubeadm
	I0629 11:22:36.858119   31982 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/linux/amd64/v1.17.0/kubectl
	I0629 11:22:36.858127   31982 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/linux/amd64/v1.17.0/kubelet
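The `?checksum=file:<url>` suffix on those download URLs tells `download.go` to fetch the `.sha256` sidecar and verify the binary against it. A sketch of the equivalent check with plain shell tools, using stand-in files under `/tmp` instead of the real cache layout (no network involved):

```shell
# Stand-in for the downloaded binary:
printf 'fake-kubeadm-binary' > /tmp/kubeadm.demo
# Stand-in for the fetched kubeadm.sha256 sidecar (here derived locally,
# so the check below is guaranteed to pass):
sha256sum /tmp/kubeadm.demo | awk '{print $1}' > /tmp/kubeadm.demo.sha256
# Compare the local digest against the sidecar, as the downloader does.
expected=$(cat /tmp/kubeadm.demo.sha256)
actual=$(sha256sum /tmp/kubeadm.demo | awk '{print $1}')
[ "$actual" = "$expected" ] && echo "checksum ok" || echo "checksum mismatch"
```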
	I0629 11:22:37.944981   31982 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl
	I0629 11:22:37.950005   31982 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubectl': No such file or directory
	I0629 11:22:37.950036   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/linux/amd64/v1.17.0/kubectl --> /var/lib/minikube/binaries/v1.17.0/kubectl (43495424 bytes)
	I0629 11:22:38.910886   31982 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm
	I0629 11:22:38.915164   31982 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubeadm': No such file or directory
	I0629 11:22:38.915188   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/linux/amd64/v1.17.0/kubeadm --> /var/lib/minikube/binaries/v1.17.0/kubeadm (39342080 bytes)
	I0629 11:22:39.312690   31982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 11:22:39.384614   31982 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet
	I0629 11:22:39.450038   31982 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubelet': No such file or directory
	I0629 11:22:39.450074   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/linux/amd64/v1.17.0/kubelet --> /var/lib/minikube/binaries/v1.17.0/kubelet (111560216 bytes)
	I0629 11:22:41.329606   31982 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0629 11:22:41.337451   31982 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0629 11:22:41.350547   31982 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 11:22:41.363320   31982 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0629 11:22:41.375670   31982 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0629 11:22:41.379784   31982 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
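The `{ grep -v ...; echo ...; } > /tmp/h.$$` command above is an idempotent hosts-file update: delete any stale line for `control-plane.minikube.internal`, then append the current IP, so repeated starts never accumulate duplicates. The same idiom replayed against a scratch copy instead of `/etc/hosts`:

```shell
hosts=/tmp/hosts.demo
tab=$(printf '\t')
# Seed the scratch file with a stale entry pointing at the wrong IP.
printf '127.0.0.1\tlocalhost\n192.168.67.3\tcontrol-plane.minikube.internal\n' > "$hosts"
update_hosts() {
  # Strip any existing entry, then append the fresh one.
  { grep -v "${tab}control-plane.minikube.internal\$" "$hosts"
    printf '192.168.67.2\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$
  cp /tmp/h.$$ "$hosts"
}
update_hosts
update_hosts   # second run shows the idiom is idempotent
grep -c 'control-plane.minikube.internal' "$hosts"
```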
	I0629 11:22:41.389899   31982 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356 for IP: 192.168.67.2
	I0629 11:22:41.390024   31982 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key
	I0629 11:22:41.390076   31982 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key
	I0629 11:22:41.390115   31982 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/client.key
	I0629 11:22:41.390129   31982 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/client.crt with IP's: []
	I0629 11:22:41.480994   31982 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/client.crt ...
	I0629 11:22:41.481004   31982 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/client.crt: {Name:mk79db20f410a4a88bb0e9a3c0b09d5ca42cbbb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:22:41.481304   31982 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/client.key ...
	I0629 11:22:41.481313   31982 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/client.key: {Name:mk427cc5d0f72cd9979dd990307fcef9856136c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:22:41.481536   31982 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/apiserver.key.c7fa3a9e
	I0629 11:22:41.481554   31982 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0629 11:22:41.576703   31982 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/apiserver.crt.c7fa3a9e ...
	I0629 11:22:41.576712   31982 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/apiserver.crt.c7fa3a9e: {Name:mk452e4ed271c7f08e41ac4637f182a2c51f5f6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:22:41.576961   31982 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/apiserver.key.c7fa3a9e ...
	I0629 11:22:41.576969   31982 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/apiserver.key.c7fa3a9e: {Name:mk05106d5c71f853bd30fd9337f991ecee1e40e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:22:41.577158   31982 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/apiserver.crt
	I0629 11:22:41.577316   31982 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/apiserver.key
	I0629 11:22:41.577482   31982 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/proxy-client.key
	I0629 11:22:41.577501   31982 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/proxy-client.crt with IP's: []
	I0629 11:22:41.716298   31982 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/proxy-client.crt ...
	I0629 11:22:41.716314   31982 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/proxy-client.crt: {Name:mk233c06403563648b01149069c6f3882dd888a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:22:41.716568   31982 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/proxy-client.key ...
	I0629 11:22:41.716576   31982 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/proxy-client.key: {Name:mk27c1477a9f1f93748a9cc576060a389eb1d9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:22:41.716962   31982 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem (1338 bytes)
	W0629 11:22:41.717002   31982 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356_empty.pem, impossibly tiny 0 bytes
	I0629 11:22:41.717011   31982 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem (1679 bytes)
	I0629 11:22:41.717041   31982 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem (1082 bytes)
	I0629 11:22:41.717076   31982 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem (1123 bytes)
	I0629 11:22:41.717107   31982 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem (1675 bytes)
	I0629 11:22:41.717192   31982 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem (1708 bytes)
	I0629 11:22:41.717711   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0629 11:22:41.735860   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0629 11:22:41.753192   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0629 11:22:41.770932   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/test-preload-20220629112211-24356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0629 11:22:41.787866   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 11:22:41.805074   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0629 11:22:41.822200   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 11:22:41.839298   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0629 11:22:41.856449   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /usr/share/ca-certificates/243562.pem (1708 bytes)
	I0629 11:22:41.873680   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 11:22:41.890898   31982 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem --> /usr/share/ca-certificates/24356.pem (1338 bytes)
	I0629 11:22:41.907906   31982 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0629 11:22:41.920429   31982 ssh_runner.go:195] Run: openssl version
	I0629 11:22:41.925562   31982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/243562.pem && ln -fs /usr/share/ca-certificates/243562.pem /etc/ssl/certs/243562.pem"
	I0629 11:22:41.934027   31982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/243562.pem
	I0629 11:22:41.938286   31982 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 17:58 /usr/share/ca-certificates/243562.pem
	I0629 11:22:41.938329   31982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/243562.pem
	I0629 11:22:41.943692   31982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/243562.pem /etc/ssl/certs/3ec20f2e.0"
	I0629 11:22:41.952013   31982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 11:22:41.960407   31982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:22:41.964296   31982 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 17:54 /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:22:41.964338   31982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:22:41.969270   31982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 11:22:41.977008   31982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24356.pem && ln -fs /usr/share/ca-certificates/24356.pem /etc/ssl/certs/24356.pem"
	I0629 11:22:41.984982   31982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24356.pem
	I0629 11:22:41.988995   31982 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 17:58 /usr/share/ca-certificates/24356.pem
	I0629 11:22:41.989050   31982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24356.pem
	I0629 11:22:41.994474   31982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24356.pem /etc/ssl/certs/51391683.0"
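The three `test ... || ln -fs` pairs above maintain OpenSSL's hashed-symlink CA directory layout: each trusted `.pem` in `/etc/ssl/certs` needs a `<subject-hash>.0` symlink so OpenSSL can look it up. The same steps replayed with a throwaway self-signed cert under `/tmp` (the CN and paths are illustrative only, and this assumes an `openssl` binary is available):

```shell
dir=/tmp/certs.demo
mkdir -p "$dir"
# Generate a throwaway self-signed cert to stand in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
  -keyout "$dir/demo.key" -out "$dir/demo.pem" 2>/dev/null
# Compute the subject hash and create the <hash>.0 lookup symlink.
hash=$(openssl x509 -hash -noout -in "$dir/demo.pem")
ln -fs "$dir/demo.pem" "$dir/$hash.0"
ls "$dir/$hash.0"
```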
	I0629 11:22:42.002501   31982 kubeadm.go:395] StartCluster: {Name:test-preload-20220629112211-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220629112211-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:22:42.002640   31982 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 11:22:42.031095   31982 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0629 11:22:42.038804   31982 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 11:22:42.045965   31982 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 11:22:42.046009   31982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 11:22:42.053336   31982 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 11:22:42.053369   31982 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 11:22:42.774226   31982 out.go:204]   - Generating certificates and keys ...
	I0629 11:22:45.543051   31982 out.go:204]   - Booting up control plane ...
	W0629 11:24:40.462458   31982 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [test-preload-20220629112211-24356 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [test-preload-20220629112211-24356 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0629 18:22:42.106993    1574 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0629 18:22:42.107045    1574 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0629 18:22:45.540012    1574 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0629 18:22:45.540754    1574 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [test-preload-20220629112211-24356 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [test-preload-20220629112211-24356 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0629 18:22:42.106993    1574 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0629 18:22:42.107045    1574 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0629 18:22:45.540012    1574 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0629 18:22:45.540754    1574 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0629 11:24:40.462491   31982 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0629 11:24:40.881643   31982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 11:24:40.892005   31982 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 11:24:40.892059   31982 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 11:24:40.899667   31982 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 11:24:40.899690   31982 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 11:24:41.581474   31982 out.go:204]   - Generating certificates and keys ...
	I0629 11:24:42.327465   31982 out.go:204]   - Booting up control plane ...
	I0629 11:26:37.249386   31982 kubeadm.go:397] StartCluster complete in 3m55.244015359s
	I0629 11:26:37.249462   31982 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:26:37.277437   31982 logs.go:274] 0 containers: []
	W0629 11:26:37.277448   31982 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:26:37.277502   31982 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:26:37.305735   31982 logs.go:274] 0 containers: []
	W0629 11:26:37.305746   31982 logs.go:276] No container was found matching "etcd"
	I0629 11:26:37.305806   31982 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:26:37.335212   31982 logs.go:274] 0 containers: []
	W0629 11:26:37.335225   31982 logs.go:276] No container was found matching "coredns"
	I0629 11:26:37.335284   31982 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:26:37.364063   31982 logs.go:274] 0 containers: []
	W0629 11:26:37.364076   31982 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:26:37.364132   31982 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:26:37.393245   31982 logs.go:274] 0 containers: []
	W0629 11:26:37.393258   31982 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:26:37.393315   31982 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:26:37.422784   31982 logs.go:274] 0 containers: []
	W0629 11:26:37.422796   31982 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:26:37.422855   31982 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:26:37.451431   31982 logs.go:274] 0 containers: []
	W0629 11:26:37.451444   31982 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:26:37.451500   31982 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:26:37.480813   31982 logs.go:274] 0 containers: []
	W0629 11:26:37.480825   31982 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:26:37.480833   31982 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:26:37.480842   31982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:26:37.532093   31982 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:26:37.532105   31982 logs.go:123] Gathering logs for Docker ...
	I0629 11:26:37.532111   31982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:26:37.548673   31982 logs.go:123] Gathering logs for container status ...
	I0629 11:26:37.548684   31982 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:26:39.604675   31982 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055953904s)
	I0629 11:26:39.604788   31982 logs.go:123] Gathering logs for kubelet ...
	I0629 11:26:39.604795   31982 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:26:39.645696   31982 logs.go:123] Gathering logs for dmesg ...
	I0629 11:26:39.645741   31982 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0629 11:26:39.659085   31982 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0629 18:24:40.952071    3853 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0629 18:24:40.952122    3853 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0629 18:24:42.322034    3853 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0629 18:24:42.322745    3853 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0629 11:26:39.659103   31982 out.go:239] * 
	W0629 11:26:39.659219   31982 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0629 18:24:40.952071    3853 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0629 18:24:40.952122    3853 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0629 18:24:42.322034    3853 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0629 18:24:42.322745    3853 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0629 11:26:39.659235   31982 out.go:239] * 
	W0629 11:26:39.659760   31982 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0629 11:26:39.723603   31982 out.go:177] 
	W0629 11:26:39.766815   31982 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0629 18:24:40.952071    3853 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0629 18:24:40.952122    3853 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0629 18:24:42.322034    3853 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0629 18:24:42.322745    3853 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0629 11:26:39.766945   31982 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0629 11:26:39.767025   31982 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0629 11:26:39.788651   31982 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:50: out/minikube-darwin-amd64 start -p test-preload-20220629112211-24356 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0 failed: exit status 109
panic.go:482: *** TestPreload FAILED at 2022-06-29 11:26:39.901304 -0700 PDT m=+2066.358161573
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-20220629112211-24356
helpers_test.go:235: (dbg) docker inspect test-preload-20220629112211-24356:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "aa9a52a9fa06eacdc67cc603afa461ce52b258b3b229840a1f9b07c6f4874a15",
	        "Created": "2022-06-29T18:22:13.526379134Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 105880,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T18:22:13.85828447Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/aa9a52a9fa06eacdc67cc603afa461ce52b258b3b229840a1f9b07c6f4874a15/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aa9a52a9fa06eacdc67cc603afa461ce52b258b3b229840a1f9b07c6f4874a15/hostname",
	        "HostsPath": "/var/lib/docker/containers/aa9a52a9fa06eacdc67cc603afa461ce52b258b3b229840a1f9b07c6f4874a15/hosts",
	        "LogPath": "/var/lib/docker/containers/aa9a52a9fa06eacdc67cc603afa461ce52b258b3b229840a1f9b07c6f4874a15/aa9a52a9fa06eacdc67cc603afa461ce52b258b3b229840a1f9b07c6f4874a15-json.log",
	        "Name": "/test-preload-20220629112211-24356",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-20220629112211-24356:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "test-preload-20220629112211-24356",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6adfba26e6edb23202bef2a297b04aa328d4d2a9f50efcb8aa6429998b0d6321-init/diff:/var/lib/docker/overlay2/fffebe0fdfada5807aeb835ff23043496ab70477725ee4f168b630301ac03e45/diff:/var/lib/docker/overlay2/d4eb6d2f34aa8e5c143d900dccdec5da9e3d130567442e6745d4efac5202fe49/diff:/var/lib/docker/overlay2/eb35fadba12ed9c48500d69b77e98e7dd72e90d3de5197d58b370df5b5dca4c7/diff:/var/lib/docker/overlay2/7b63894f671ef1edaa7c3b80a2acbde52dcdb21970e320799b6884e79553ea3e/diff:/var/lib/docker/overlay2/3740b6bc6ff226137eb09a6350d4395dc04bd9012c6c66125dc2ea6b663082cd/diff:/var/lib/docker/overlay2/a2fda66ed4937725e85838baed61cac418abe2ba55b4e664bf944246efcdd371/diff:/var/lib/docker/overlay2/574408913c5c73ee699b85768bbb4c0ce70e697bf6eb623e32017c62e8413acd/diff:/var/lib/docker/overlay2/1cde03c3877bfb18ad0533f814863e3030abec268ff30faceab8815ea7e2daf2/diff:/var/lib/docker/overlay2/52bf889e64b2ea0160f303622d5febb9c52b864e5a6dc2bfa5db90933ccaaa29/diff:/var/lib/docker/overlay2/b131e6
ae4a7a7f5705d087e4001676276e4daa26d6acfc99799bb4992e322410/diff:/var/lib/docker/overlay2/3f5c774f6f46936a974bfc6530b012fda75a59b22450e3342486fe400ab4b531/diff:/var/lib/docker/overlay2/8462528084f0c44a79e421427e0e4bc9ddd7642428c47ff1899d41b265223245/diff:/var/lib/docker/overlay2/cb9765866d13ba37669ec242ea0a1af87c92c7291c716e52037a2ccadc64ac82/diff:/var/lib/docker/overlay2/f0d06e6fa53f3ca9622f1efcfac6fe3fd18d2e5b9e07be3d624b0b9987073e55/diff:/var/lib/docker/overlay2/4ebd12d8b25cff2d3d8a989c047b696088121f0964cc7f94c6d0178ef16e3e1f/diff:/var/lib/docker/overlay2/40e16f5720fd3a8c1c8792aea0ec143af819f19cad845dde40b57ed7e372ab73/diff:/var/lib/docker/overlay2/3ce5ee64ba683c997a13b7ffa65978b4c9652772729737facd794209d49251c3/diff:/var/lib/docker/overlay2/c55c549a78d490ea576942661ba65103ea2992693548217973bb8fa1a5948b74/diff:/var/lib/docker/overlay2/4651b16dbc2e22b8a43dc1154546514f2076168d12f9c108f85fe7c6e60325f0/diff:/var/lib/docker/overlay2/9576343ea03501b15b520a83ffdc675c6d9ecd501f6ffcf6564dd75aa4f2812a/diff:/var/lib/d
ocker/overlay2/635ba7d01f96fd1ec1acabf157f4e5c00cbf80adf65b7f8873e444745fef2c9b/diff:/var/lib/docker/overlay2/6bbe0ce6ca00a7eb5bd7c22def5fcab4ebecab4a0b4cbc5ed236429671a41b6c/diff:/var/lib/docker/overlay2/b335551ba0fcfd6bff6ef5627289041f3083dc338e67b4f4728d4937bb6fb33a/diff:/var/lib/docker/overlay2/58cd90f6ad9016f3c4befb63eac504c9d2f0fc66251c5c9e3348080785d3cec4/diff:/var/lib/docker/overlay2/b7d943a8463e032d405d531846436b89574f10efeea6e4f2df92e3bb0e169d8e/diff:/var/lib/docker/overlay2/e633899f71c18e322af1b75837392bc89fd4275534b5bc70037965b0b80a770d/diff:/var/lib/docker/overlay2/651aabda39b5851bd186e23bc84f1029d819ed8eb032b13ac12f50f3d1486bfb/diff:/var/lib/docker/overlay2/3b137e27694d242a419b3fd2f8605837edfe77dae9462c63c3d7b41538e82591/diff:/var/lib/docker/overlay2/e9d4369b871c47acb146b73f8cbe14b89b0f74027df9117a7dc73f5dee8fee1c/diff:/var/lib/docker/overlay2/9379269362a969b07cc7d7f9faff9fa3b745529df38758733014a5dbe2470775/diff:/var/lib/docker/overlay2/9231c154723fa536d9894f703ec0388448e8611d5a01d54bca3a5b0a0b1
7ffd2/diff:/var/lib/docker/overlay2/9610e37ded5c6da7bd2c8edc56c3ae864637bb354f8ea3d6d1ccee6bd5c2aa7f/diff:/var/lib/docker/overlay2/025ecca5e756b1b8177204df7b2f2567a76dda456b2f1a8e312efd63150a8943/diff:/var/lib/docker/overlay2/7e69089e438e096c36ea0a4a37280fd036841e3287e57635e3407eb58fc0b6da/diff:/var/lib/docker/overlay2/c6d9ef67ed33e64c8ac8c4cdc7c33eb68f5266987969676165cabc2cf2fd346b/diff:/var/lib/docker/overlay2/394627c68237f7993b91eb0c377001630bb2e709dd58f65d899d44a3586dae91/diff:/var/lib/docker/overlay2/0c0c3c94789fc85cd70d9ee2b56d67ce6471d4dced47f21f15152d4edb6bc3e5/diff:/var/lib/docker/overlay2/849809e48c9bcbfe092aa063fcd274f284eeacde89acbb602b439d4cf0aef9b6/diff:/var/lib/docker/overlay2/49c27f0a55f204b161aa2da33ba8004f46cb93bf673975ad1b6286ce659db632/diff:/var/lib/docker/overlay2/a712a8f5cdb2f3840c706296240407405826d2936df034393c1ddf3cf2480b5f/diff:/var/lib/docker/overlay2/47949bfd134ff7a50def5e9b3af3424faf216354d1f157552f3c63c67c2728ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6adfba26e6edb23202bef2a297b04aa328d4d2a9f50efcb8aa6429998b0d6321/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6adfba26e6edb23202bef2a297b04aa328d4d2a9f50efcb8aa6429998b0d6321/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6adfba26e6edb23202bef2a297b04aa328d4d2a9f50efcb8aa6429998b0d6321/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-20220629112211-24356",
	                "Source": "/var/lib/docker/volumes/test-preload-20220629112211-24356/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-20220629112211-24356",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-20220629112211-24356",
	                "name.minikube.sigs.k8s.io": "test-preload-20220629112211-24356",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c9b06403b153e7f2b418864df6fc196d078bad6503623298c3a0719f86818cdc",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "54090"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "54086"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "54087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "54088"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "54089"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c9b06403b153",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-20220629112211-24356": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "aa9a52a9fa06",
	                        "test-preload-20220629112211-24356"
	                    ],
	                    "NetworkID": "7d87efe8de1c3fb6a78460db79feeb42fa9081ca5c5e93281794c3ee95b5c074",
	                    "EndpointID": "6b43de21db3e1c13a1f295f46d63111b07e74f2f1a6bc2575d5db4c98f751f09",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20220629112211-24356 -n test-preload-20220629112211-24356
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20220629112211-24356 -n test-preload-20220629112211-24356: exit status 6 (425.180543ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0629 11:26:40.386610   32447 status.go:413] kubeconfig endpoint: extract IP: "test-preload-20220629112211-24356" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "test-preload-20220629112211-24356" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "test-preload-20220629112211-24356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-20220629112211-24356
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-20220629112211-24356: (2.540642028s)
--- FAIL: TestPreload (271.48s)

                                                
                                    
TestRunningBinaryUpgrade (72.16s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1620201363.exe start -p running-upgrade-20220629113205-24356 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1620201363.exe start -p running-upgrade-20220629113205-24356 --memory=2200 --vm-driver=docker : exit status 70 (56.929123969s)

                                                
                                                
-- stdout --
	! [running-upgrade-20220629113205-24356] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14420
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig2310987540
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-29 18:32:44.194259849 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-20220629113205-24356" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-29 18:33:00.647760702 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-20220629113205-24356", then "minikube start -p running-upgrade-20220629113205-24356 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	* minikube 1.26.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.26.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 12.94 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 34.67 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 56.42 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 79.03 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 97.09 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 117.23 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 139.22 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 160.34 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 175.69 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 193.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 208.19 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 222.95 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 242.08 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 263.22 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 285.64 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 306.66 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 328.73 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 350.75 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 373.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 394.48 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 416.30 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 435.48 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 451.55 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 465.17 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 478.34 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 492.28 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 497.53 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 505.28 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 514.20 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 527.09 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-29 18:33:00.647760702 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:127: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1620201363.exe start -p running-upgrade-20220629113205-24356 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1620201363.exe start -p running-upgrade-20220629113205-24356 --memory=2200 --vm-driver=docker : exit status 70 (5.06767091s)

-- stdout --
	* [running-upgrade-20220629113205-24356] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14420
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig3711243022
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20220629113205-24356" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:127: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1620201363.exe start -p running-upgrade-20220629113205-24356 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1620201363.exe start -p running-upgrade-20220629113205-24356 --memory=2200 --vm-driver=docker : exit status 70 (4.767507543s)

-- stdout --
	* [running-upgrade-20220629113205-24356] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14420
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig1952420034
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20220629113205-24356" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:133: legacy v1.9.0 start failed: exit status 70
panic.go:482: *** TestRunningBinaryUpgrade FAILED at 2022-06-29 11:33:14.52844 -0700 PDT m=+2460.958010867
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-20220629113205-24356
helpers_test.go:235: (dbg) docker inspect running-upgrade-20220629113205-24356:

-- stdout --
	[
	    {
	        "Id": "855722173a399bdecf45487d3bc9d623524638e4ddc0e2a235583f0e2c0f0dbf",
	        "Created": "2022-06-29T18:32:52.412590063Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 141090,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T18:32:52.637348846Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/855722173a399bdecf45487d3bc9d623524638e4ddc0e2a235583f0e2c0f0dbf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/855722173a399bdecf45487d3bc9d623524638e4ddc0e2a235583f0e2c0f0dbf/hostname",
	        "HostsPath": "/var/lib/docker/containers/855722173a399bdecf45487d3bc9d623524638e4ddc0e2a235583f0e2c0f0dbf/hosts",
	        "LogPath": "/var/lib/docker/containers/855722173a399bdecf45487d3bc9d623524638e4ddc0e2a235583f0e2c0f0dbf/855722173a399bdecf45487d3bc9d623524638e4ddc0e2a235583f0e2c0f0dbf-json.log",
	        "Name": "/running-upgrade-20220629113205-24356",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-20220629113205-24356:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e5616b246656dd5d6b6e3f992a80cd4718a0252a95e9897167627eb78844a7a5-init/diff:/var/lib/docker/overlay2/8b8b79709b808eaa27a04e2ec296f1b2d21c5d25614b9d1347d1fd8285409cef/diff:/var/lib/docker/overlay2/7574f2f1bbb9d21a17ced2d509fbd098e1d8b2fb202e936dd5f1c0be8d30e813/diff:/var/lib/docker/overlay2/5029e661ba0bdf2f7295c0f7b33739da7b0ff62c1d9a87125e26cfac57c158e5/diff:/var/lib/docker/overlay2/eeaea74acabb962979a44d0d4f74715548948a7c291f4e8234095cd17b24f658/diff:/var/lib/docker/overlay2/e32cfafd4170cab3fe8b3ebbdae424666050b7a451ce0b3d793e0c8fe4d36180/diff:/var/lib/docker/overlay2/96a607706312a1042389375c84fdfb79339f36409afc5c119af55288d423b9a1/diff:/var/lib/docker/overlay2/cc80edf1fa40a1935a9ca67b8fd864978912d0ad09469d13c62261d83f4fff4a/diff:/var/lib/docker/overlay2/3441df5b815fa8635ca545ade8febbed1e2b1a9efe0a226cdb1c735bd0ea955e/diff:/var/lib/docker/overlay2/018b402027d28b2174d00d507daaf1145a05d8d61476db538fb07a2727212ac9/diff:/var/lib/docker/overlay2/056157fb82ca1cc502427bbb658c3194c224632045154515cae8817675d79c29/diff:/var/lib/docker/overlay2/262548fcd077bf710edff1d9d1397f49654d654564525d61becc910a047cb35f/diff:/var/lib/docker/overlay2/fbe9d134fa113f2f913d2b646478f35fd967983667130f21a4ac49fc3eb3a61c/diff:/var/lib/docker/overlay2/cafd9c31263a2dd59718bf33194ee108ff2cb04ebe88da0f3b3075c86eecb290/diff:/var/lib/docker/overlay2/5a3fc86875a53ae2276ef1730f3b687652c07186573ae7089e84af1a2fd1da5e/diff:/var/lib/docker/overlay2/78d1206897017a1ee2983b8dc9747b6ffd1a73fa6fa5628f14b96793d4ffed51/diff:/var/lib/docker/overlay2/cd737964a0abf017c8a3dd052b56c31fbe465e55076b34484140c2492eab424e/diff:/var/lib/docker/overlay2/8b02f7e5ffdacccb5e40e789266be8c31d6c9005abbbe17242a230ebd7308799/diff:/var/lib/docker/overlay2/c168f283b555c193d448bd26f0733e8742770578e8ef350338634c663fdec6a8/diff:/var/lib/docker/overlay2/520ceca20125bf29f608fb18d9dbba7adaafad3e241e87064ec5856c27f4c271/diff:/var/lib/docker/overlay2/2e333694e543acf6961736ff91d5c670ed92071da339d43fcc9bbd9d28e6d369/diff:/var/lib/docker/overlay2/a5a64984b612987ad9eb98efcddacb5d12fede3ff92d8324ffb45875d996df9a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e5616b246656dd5d6b6e3f992a80cd4718a0252a95e9897167627eb78844a7a5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e5616b246656dd5d6b6e3f992a80cd4718a0252a95e9897167627eb78844a7a5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e5616b246656dd5d6b6e3f992a80cd4718a0252a95e9897167627eb78844a7a5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-20220629113205-24356",
	                "Source": "/var/lib/docker/volumes/running-upgrade-20220629113205-24356/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-20220629113205-24356",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-20220629113205-24356",
	                "name.minikube.sigs.k8s.io": "running-upgrade-20220629113205-24356",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "afc2b23fecf88875b43c6f4e06387a1d5d85038fc2084af6e84170ddef25c8a2",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55956"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55957"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55958"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/afc2b23fecf8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "b4c5617f768bc33b0de5011006ee39336601c3d531616fdc1ad36ff23f6d3ba6",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "9c9c31cb50892651628ac3f665ecb74b34b04b1d52a900a1fe279edf900c294c",
	                    "EndpointID": "b4c5617f768bc33b0de5011006ee39336601c3d531616fdc1ad36ff23f6d3ba6",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20220629113205-24356 -n running-upgrade-20220629113205-24356
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20220629113205-24356 -n running-upgrade-20220629113205-24356: exit status 6 (425.656631ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0629 11:33:15.018003   34686 status.go:413] kubeconfig endpoint: extract IP: "running-upgrade-20220629113205-24356" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-20220629113205-24356" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-20220629113205-24356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-20220629113205-24356
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-20220629113205-24356: (2.432516592s)
--- FAIL: TestRunningBinaryUpgrade (72.16s)
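Context for the failure above: the provisioning step rewrites /lib/systemd/system/docker.service, and the captured diff's own comments describe systemd's ExecStart-clearing convention. As a minimal illustration of that mechanism only (the drop-in path is hypothetical and the dockerd flags are abbreviated from the captured unit, not a recommendation), an override that replaces an inherited ExecStart looks like:

```ini
# /etc/docker/system/docker.service.d/override.conf  -- hypothetical drop-in path,
# shown only to illustrate the ExecStart-clearing convention referenced in the diff.
[Service]
# An empty ExecStart= clears the command inherited from the base unit.
# Without it, systemd treats the base and override commands as two ExecStart=
# settings and refuses to start a non-oneshot service:
#   "Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services."
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```

After changing a unit or drop-in, `systemctl daemon-reload` must run before `systemctl restart docker`, which is exactly the sequence the failing provisioning command (`sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker`) attempts.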

TestKubernetesUpgrade (583.72s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220629113407-24356 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0629 11:34:24.334908   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
E0629 11:34:24.340022   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
E0629 11:34:24.350103   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
E0629 11:34:24.370760   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
E0629 11:34:24.411471   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
E0629 11:34:24.491610   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
E0629 11:34:24.652924   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
E0629 11:34:24.973452   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
E0629 11:34:25.613755   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
E0629 11:34:26.894168   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
E0629 11:34:29.454815   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
E0629 11:34:34.576460   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
E0629 11:34:44.817105   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220629113407-24356 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m14.337095021s)

-- stdout --
	* [kubernetes-upgrade-20220629113407-24356] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14420
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-20220629113407-24356 in cluster kubernetes-upgrade-20220629113407-24356
	* Pulling base image ...
	* Downloading Kubernetes v1.16.0 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0629 11:34:07.340509   35053 out.go:296] Setting OutFile to fd 1 ...
	I0629 11:34:07.340666   35053 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:34:07.340672   35053 out.go:309] Setting ErrFile to fd 2...
	I0629 11:34:07.340675   35053 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:34:07.340995   35053 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 11:34:07.341313   35053 out.go:303] Setting JSON to false
	I0629 11:34:07.356345   35053 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":9215,"bootTime":1656518432,"procs":377,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0629 11:34:07.356440   35053 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 11:34:07.378920   35053 out.go:177] * [kubernetes-upgrade-20220629113407-24356] minikube v1.26.0 on Darwin 12.4
	I0629 11:34:07.421505   35053 notify.go:193] Checking for updates...
	I0629 11:34:07.442593   35053 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 11:34:07.464591   35053 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 11:34:07.485836   35053 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0629 11:34:07.507810   35053 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 11:34:07.529844   35053 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 11:34:07.552348   35053 config.go:178] Loaded profile config "cert-expiration-20220629113118-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 11:34:07.552441   35053 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 11:34:07.622716   35053 docker.go:137] docker version: linux-20.10.16
	I0629 11:34:07.622853   35053 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:34:07.744448   35053 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 18:34:07.688824834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:34:07.766288   35053 out.go:177] * Using the docker driver based on user configuration
	I0629 11:34:07.787049   35053 start.go:284] selected driver: docker
	I0629 11:34:07.787084   35053 start.go:808] validating driver "docker" against <nil>
	I0629 11:34:07.787113   35053 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 11:34:07.790594   35053 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:34:07.912652   35053 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 18:34:07.856743058 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:34:07.912805   35053 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0629 11:34:07.912951   35053 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0629 11:34:07.934807   35053 out.go:177] * Using Docker Desktop driver with root privileges
	I0629 11:34:07.956910   35053 cni.go:95] Creating CNI manager for ""
	I0629 11:34:07.956939   35053 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:34:07.956953   35053 start_flags.go:310] config:
	{Name:kubernetes-upgrade-20220629113407-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220629113407-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:
[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:34:07.978714   35053 out.go:177] * Starting control plane node kubernetes-upgrade-20220629113407-24356 in cluster kubernetes-upgrade-20220629113407-24356
	I0629 11:34:08.022745   35053 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 11:34:08.045575   35053 out.go:177] * Pulling base image ...
	I0629 11:34:08.089019   35053 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0629 11:34:08.089007   35053 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 11:34:08.154046   35053 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 11:34:08.154113   35053 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 11:34:08.158467   35053 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0629 11:34:08.158483   35053 cache.go:57] Caching tarball of preloaded images
	I0629 11:34:08.158682   35053 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0629 11:34:08.207341   35053 out.go:177] * Downloading Kubernetes v1.16.0 preload ...
	I0629 11:34:08.228360   35053 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0629 11:34:08.321943   35053 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0629 11:34:12.374561   35053 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0629 11:34:12.374786   35053 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0629 11:34:12.922338   35053 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0629 11:34:12.922433   35053 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/config.json ...
	I0629 11:34:12.922456   35053 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/config.json: {Name:mkf53c826d58f71a6ac0e70564963c8f527189d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:34:12.922706   35053 cache.go:208] Successfully downloaded all kic artifacts
	I0629 11:34:12.922734   35053 start.go:352] acquiring machines lock for kubernetes-upgrade-20220629113407-24356: {Name:mkc74a80cdb36272141051e347a92a2de37814fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 11:34:12.922824   35053 start.go:356] acquired machines lock for "kubernetes-upgrade-20220629113407-24356" in 82.451µs
	I0629 11:34:12.922847   35053 start.go:91] Provisioning new machine with config: &{Name:kubernetes-upgrade-20220629113407-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-2022062911340
7-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFir
mwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 11:34:12.922886   35053 start.go:131] createHost starting for "" (driver="docker")
	I0629 11:34:12.965787   35053 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0629 11:34:12.966043   35053 start.go:165] libmachine.API.Create for "kubernetes-upgrade-20220629113407-24356" (driver="docker")
	I0629 11:34:12.966075   35053 client.go:168] LocalClient.Create starting
	I0629 11:34:12.966175   35053 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem
	I0629 11:34:12.966221   35053 main.go:134] libmachine: Decoding PEM data...
	I0629 11:34:12.966241   35053 main.go:134] libmachine: Parsing certificate...
	I0629 11:34:12.966298   35053 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem
	I0629 11:34:12.966332   35053 main.go:134] libmachine: Decoding PEM data...
	I0629 11:34:12.966346   35053 main.go:134] libmachine: Parsing certificate...
	I0629 11:34:12.966939   35053 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220629113407-24356 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0629 11:34:13.031545   35053 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220629113407-24356 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0629 11:34:13.031622   35053 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220629113407-24356] to gather additional debugging logs...
	I0629 11:34:13.031642   35053 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220629113407-24356
	W0629 11:34:13.093358   35053 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220629113407-24356 returned with exit code 1
	I0629 11:34:13.093384   35053 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220629113407-24356]: docker network inspect kubernetes-upgrade-20220629113407-24356: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220629113407-24356
	I0629 11:34:13.093425   35053 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220629113407-24356]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220629113407-24356
	
	** /stderr **
	I0629 11:34:13.093538   35053 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0629 11:34:13.156328   35053 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000becfb0] misses:0}
	I0629 11:34:13.156361   35053 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 11:34:13.156376   35053 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220629113407-24356 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0629 11:34:13.156448   35053 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220629113407-24356 kubernetes-upgrade-20220629113407-24356
	W0629 11:34:13.219265   35053 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220629113407-24356 kubernetes-upgrade-20220629113407-24356 returned with exit code 1
	W0629 11:34:13.219302   35053 network_create.go:107] failed to create docker network kubernetes-upgrade-20220629113407-24356 192.168.49.0/24, will retry: subnet is taken
	I0629 11:34:13.219730   35053 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000becfb0] amended:false}} dirty:map[] misses:0}
	I0629 11:34:13.219750   35053 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 11:34:13.219983   35053 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000becfb0] amended:true}} dirty:map[192.168.49.0:0xc000becfb0 192.168.58.0:0xc00051e428] misses:0}
	I0629 11:34:13.219999   35053 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 11:34:13.220008   35053 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220629113407-24356 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0629 11:34:13.220073   35053 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220629113407-24356 kubernetes-upgrade-20220629113407-24356
	W0629 11:34:13.282372   35053 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220629113407-24356 kubernetes-upgrade-20220629113407-24356 returned with exit code 1
	W0629 11:34:13.282426   35053 network_create.go:107] failed to create docker network kubernetes-upgrade-20220629113407-24356 192.168.58.0/24, will retry: subnet is taken
	I0629 11:34:13.282695   35053 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000becfb0] amended:true}} dirty:map[192.168.49.0:0xc000becfb0 192.168.58.0:0xc00051e428] misses:1}
	I0629 11:34:13.282719   35053 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 11:34:13.282940   35053 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000becfb0] amended:true}} dirty:map[192.168.49.0:0xc000becfb0 192.168.58.0:0xc00051e428 192.168.67.0:0xc000becfe8] misses:1}
	I0629 11:34:13.282965   35053 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 11:34:13.282974   35053 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220629113407-24356 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0629 11:34:13.283039   35053 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220629113407-24356 kubernetes-upgrade-20220629113407-24356
	W0629 11:34:13.344784   35053 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220629113407-24356 kubernetes-upgrade-20220629113407-24356 returned with exit code 1
	W0629 11:34:13.344921   35053 network_create.go:107] failed to create docker network kubernetes-upgrade-20220629113407-24356 192.168.67.0/24, will retry: subnet is taken
	I0629 11:34:13.345193   35053 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000becfb0] amended:true}} dirty:map[192.168.49.0:0xc000becfb0 192.168.58.0:0xc00051e428 192.168.67.0:0xc000becfe8] misses:2}
	I0629 11:34:13.345210   35053 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 11:34:13.345449   35053 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000becfb0] amended:true}} dirty:map[192.168.49.0:0xc000becfb0 192.168.58.0:0xc00051e428 192.168.67.0:0xc000becfe8 192.168.76.0:0xc00051e460] misses:2}
	I0629 11:34:13.345464   35053 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 11:34:13.345471   35053 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220629113407-24356 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0629 11:34:13.345533   35053 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220629113407-24356 kubernetes-upgrade-20220629113407-24356
	I0629 11:34:13.439581   35053 network_create.go:99] docker network kubernetes-upgrade-20220629113407-24356 192.168.76.0/24 created
	I0629 11:34:13.439619   35053 kic.go:106] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-20220629113407-24356" container
	I0629 11:34:13.439725   35053 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0629 11:34:13.506926   35053 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-20220629113407-24356 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220629113407-24356 --label created_by.minikube.sigs.k8s.io=true
	I0629 11:34:13.570903   35053 oci.go:103] Successfully created a docker volume kubernetes-upgrade-20220629113407-24356
	I0629 11:34:13.571103   35053 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-20220629113407-24356-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220629113407-24356 --entrypoint /usr/bin/test -v kubernetes-upgrade-20220629113407-24356:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -d /var/lib
	I0629 11:34:14.009775   35053 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-20220629113407-24356
	I0629 11:34:14.009975   35053 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0629 11:34:14.009991   35053 kic.go:179] Starting extracting preloaded images to volume ...
	I0629 11:34:14.010076   35053 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20220629113407-24356:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -I lz4 -xf /preloaded.tar -C /extractDir
	I0629 11:34:17.908895   35053 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20220629113407-24356:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -I lz4 -xf /preloaded.tar -C /extractDir: (3.898637857s)
	I0629 11:34:17.909080   35053 kic.go:188] duration metric: took 3.898997 seconds to extract preloaded images to volume
	I0629 11:34:17.909183   35053 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0629 11:34:18.034130   35053 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20220629113407-24356 --name kubernetes-upgrade-20220629113407-24356 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220629113407-24356 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20220629113407-24356 --network kubernetes-upgrade-20220629113407-24356 --ip 192.168.76.2 --volume kubernetes-upgrade-20220629113407-24356:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e
	I0629 11:34:18.434559   35053 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220629113407-24356 --format={{.State.Running}}
	I0629 11:34:18.508786   35053 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220629113407-24356 --format={{.State.Status}}
	I0629 11:34:18.587840   35053 cli_runner.go:164] Run: docker exec kubernetes-upgrade-20220629113407-24356 stat /var/lib/dpkg/alternatives/iptables
	I0629 11:34:18.720401   35053 oci.go:144] the created container "kubernetes-upgrade-20220629113407-24356" has a running status.
	I0629 11:34:18.720545   35053 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/kubernetes-upgrade-20220629113407-24356/id_rsa...
	I0629 11:34:18.952907   35053 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/kubernetes-upgrade-20220629113407-24356/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0629 11:34:19.067097   35053 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220629113407-24356 --format={{.State.Status}}
	I0629 11:34:19.137776   35053 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0629 11:34:19.137871   35053 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-20220629113407-24356 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0629 11:34:19.261927   35053 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220629113407-24356 --format={{.State.Status}}
	I0629 11:34:19.332584   35053 machine.go:88] provisioning docker machine ...
	I0629 11:34:19.332636   35053 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220629113407-24356"
	I0629 11:34:19.332759   35053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:34:19.405727   35053 main.go:134] libmachine: Using SSH client type: native
	I0629 11:34:19.405914   35053 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 56445 <nil> <nil>}
	I0629 11:34:19.405940   35053 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20220629113407-24356 && echo "kubernetes-upgrade-20220629113407-24356" | sudo tee /etc/hostname
	I0629 11:34:19.530766   35053 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220629113407-24356
	
	I0629 11:34:19.530865   35053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:34:19.604692   35053 main.go:134] libmachine: Using SSH client type: native
	I0629 11:34:19.604859   35053 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 56445 <nil> <nil>}
	I0629 11:34:19.604877   35053 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20220629113407-24356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220629113407-24356/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20220629113407-24356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 11:34:19.724040   35053 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 11:34:19.724061   35053 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube}
	I0629 11:34:19.724083   35053 ubuntu.go:177] setting up certificates
	I0629 11:34:19.724090   35053 provision.go:83] configureAuth start
	I0629 11:34:19.724151   35053 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220629113407-24356
	I0629 11:34:19.796114   35053 provision.go:138] copyHostCerts
	I0629 11:34:19.796187   35053 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem, removing ...
	I0629 11:34:19.796195   35053 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem
	I0629 11:34:19.796290   35053 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem (1123 bytes)
	I0629 11:34:19.796474   35053 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem, removing ...
	I0629 11:34:19.796505   35053 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem
	I0629 11:34:19.796565   35053 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem (1675 bytes)
	I0629 11:34:19.796707   35053 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem, removing ...
	I0629 11:34:19.796712   35053 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem
	I0629 11:34:19.796769   35053 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem (1082 bytes)
	I0629 11:34:19.796882   35053 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20220629113407-24356 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220629113407-24356]
	I0629 11:34:19.919628   35053 provision.go:172] copyRemoteCerts
	I0629 11:34:19.919685   35053 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 11:34:19.919738   35053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:34:19.990448   35053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56445 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/kubernetes-upgrade-20220629113407-24356/id_rsa Username:docker}
	I0629 11:34:20.078972   35053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0629 11:34:20.096638   35053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
	I0629 11:34:20.113071   35053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0629 11:34:20.129968   35053 provision.go:86] duration metric: configureAuth took 405.855947ms
	I0629 11:34:20.129981   35053 ubuntu.go:193] setting minikube options for container-runtime
	I0629 11:34:20.130113   35053 config.go:178] Loaded profile config "kubernetes-upgrade-20220629113407-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0629 11:34:20.130166   35053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:34:20.201169   35053 main.go:134] libmachine: Using SSH client type: native
	I0629 11:34:20.201331   35053 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 56445 <nil> <nil>}
	I0629 11:34:20.201346   35053 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0629 11:34:20.319813   35053 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0629 11:34:20.319827   35053 ubuntu.go:71] root file system type: overlay
	I0629 11:34:20.319981   35053 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0629 11:34:20.320064   35053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:34:20.390767   35053 main.go:134] libmachine: Using SSH client type: native
	I0629 11:34:20.390932   35053 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 56445 <nil> <nil>}
	I0629 11:34:20.390983   35053 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0629 11:34:20.517763   35053 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0629 11:34:20.517853   35053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:34:20.592187   35053 main.go:134] libmachine: Using SSH client type: native
	I0629 11:34:20.592354   35053 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 56445 <nil> <nil>}
	I0629 11:34:20.592367   35053 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0629 11:34:21.197989   35053 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-29 18:34:20.532250446 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0629 11:34:21.198009   35053 machine.go:91] provisioned docker machine in 1.865361876s
	I0629 11:34:21.198014   35053 client.go:171] LocalClient.Create took 8.231741685s
	I0629 11:34:21.198033   35053 start.go:173] duration metric: libmachine.API.Create for "kubernetes-upgrade-20220629113407-24356" took 8.231796173s
	I0629 11:34:21.198042   35053 start.go:306] post-start starting for "kubernetes-upgrade-20220629113407-24356" (driver="docker")
	I0629 11:34:21.198046   35053 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 11:34:21.198120   35053 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 11:34:21.198169   35053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:34:21.269650   35053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56445 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/kubernetes-upgrade-20220629113407-24356/id_rsa Username:docker}
	I0629 11:34:21.355958   35053 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 11:34:21.359614   35053 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 11:34:21.359629   35053 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 11:34:21.359636   35053 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 11:34:21.359641   35053 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 11:34:21.359650   35053 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/addons for local assets ...
	I0629 11:34:21.359754   35053 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files for local assets ...
	I0629 11:34:21.359911   35053 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem -> 243562.pem in /etc/ssl/certs
	I0629 11:34:21.360060   35053 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 11:34:21.367390   35053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /etc/ssl/certs/243562.pem (1708 bytes)
	I0629 11:34:21.385033   35053 start.go:309] post-start completed in 186.978553ms
	I0629 11:34:21.385553   35053 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220629113407-24356
	I0629 11:34:21.456769   35053 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/config.json ...
	I0629 11:34:21.457254   35053 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 11:34:21.457310   35053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:34:21.528624   35053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56445 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/kubernetes-upgrade-20220629113407-24356/id_rsa Username:docker}
	I0629 11:34:21.616345   35053 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 11:34:21.620761   35053 start.go:134] duration metric: createHost completed in 8.697661462s
	I0629 11:34:21.620781   35053 start.go:81] releasing machines lock for "kubernetes-upgrade-20220629113407-24356", held for 8.697744606s
	I0629 11:34:21.620868   35053 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220629113407-24356
	I0629 11:34:21.692076   35053 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 11:34:21.692076   35053 ssh_runner.go:195] Run: systemctl --version
	I0629 11:34:21.692153   35053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:34:21.692159   35053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:34:21.768930   35053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56445 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/kubernetes-upgrade-20220629113407-24356/id_rsa Username:docker}
	I0629 11:34:21.771913   35053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56445 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/kubernetes-upgrade-20220629113407-24356/id_rsa Username:docker}
	I0629 11:34:22.342680   35053 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0629 11:34:22.378301   35053 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0629 11:34:22.378358   35053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 11:34:22.387524   35053 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 11:34:22.400289   35053 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0629 11:34:22.464859   35053 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0629 11:34:22.528577   35053 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 11:34:22.598791   35053 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0629 11:34:22.807477   35053 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 11:34:22.842693   35053 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 11:34:22.898996   35053 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0629 11:34:22.899063   35053 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-20220629113407-24356 dig +short host.docker.internal
	I0629 11:34:23.042610   35053 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0629 11:34:23.042910   35053 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0629 11:34:23.047155   35053 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 11:34:23.056444   35053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:34:23.127637   35053 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0629 11:34:23.127706   35053 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 11:34:23.157615   35053 docker.go:602] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0629 11:34:23.157632   35053 docker.go:533] Images already preloaded, skipping extraction
	I0629 11:34:23.157721   35053 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 11:34:23.187560   35053 docker.go:602] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0629 11:34:23.187577   35053 cache_images.go:84] Images are preloaded, skipping loading
	I0629 11:34:23.187651   35053 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0629 11:34:23.262478   35053 cni.go:95] Creating CNI manager for ""
	I0629 11:34:23.262492   35053 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:34:23.262503   35053 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0629 11:34:23.262519   35053 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220629113407-24356 NodeName:kubernetes-upgrade-20220629113407-24356 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 11:34:23.262647   35053 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-20220629113407-24356"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-20220629113407-24356
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0629 11:34:23.262715   35053 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-20220629113407-24356 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220629113407-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0629 11:34:23.262779   35053 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0629 11:34:23.270339   35053 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 11:34:23.270391   35053 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0629 11:34:23.277375   35053 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I0629 11:34:23.290390   35053 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 11:34:23.304646   35053 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0629 11:34:23.317870   35053 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0629 11:34:23.321359   35053 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 11:34:23.331419   35053 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356 for IP: 192.168.76.2
	I0629 11:34:23.331555   35053 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key
	I0629 11:34:23.331608   35053 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key
	I0629 11:34:23.331652   35053 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/client.key
	I0629 11:34:23.331666   35053 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/client.crt with IP's: []
	I0629 11:34:23.508578   35053 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/client.crt ...
	I0629 11:34:23.508594   35053 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/client.crt: {Name:mk920955a19c3afd4679e2aa6b44c1acddade82d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:34:23.508911   35053 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/client.key ...
	I0629 11:34:23.508926   35053 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/client.key: {Name:mkc85c2939d8cac7a934bc38b5dbc3a84e87e88a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:34:23.509130   35053 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/apiserver.key.31bdca25
	I0629 11:34:23.509153   35053 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0629 11:34:23.625299   35053 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/apiserver.crt.31bdca25 ...
	I0629 11:34:23.625309   35053 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/apiserver.crt.31bdca25: {Name:mk160140dd9651c54e40b1fec53c24ed8fe3b401 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:34:23.625537   35053 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/apiserver.key.31bdca25 ...
	I0629 11:34:23.625544   35053 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/apiserver.key.31bdca25: {Name:mkd317d8ed3797812045a4f276a16660594262cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:34:23.625718   35053 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/apiserver.crt
	I0629 11:34:23.625852   35053 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/apiserver.key
	I0629 11:34:23.625992   35053 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/proxy-client.key
	I0629 11:34:23.626006   35053 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/proxy-client.crt with IP's: []
	I0629 11:34:24.007991   35053 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/proxy-client.crt ...
	I0629 11:34:24.008008   35053 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/proxy-client.crt: {Name:mkfe5194acc3a226444a1118b4f2283bdd638e72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:34:24.008307   35053 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/proxy-client.key ...
	I0629 11:34:24.008315   35053 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/proxy-client.key: {Name:mk3973eea093ecb535cbe6cffb80e5f15782c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:34:24.008688   35053 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem (1338 bytes)
	W0629 11:34:24.008729   35053 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356_empty.pem, impossibly tiny 0 bytes
	I0629 11:34:24.008739   35053 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem (1679 bytes)
	I0629 11:34:24.008773   35053 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem (1082 bytes)
	I0629 11:34:24.008803   35053 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem (1123 bytes)
	I0629 11:34:24.008834   35053 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem (1675 bytes)
	I0629 11:34:24.008896   35053 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem (1708 bytes)
	I0629 11:34:24.009438   35053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0629 11:34:24.027753   35053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0629 11:34:24.045225   35053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0629 11:34:24.062558   35053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0629 11:34:24.079527   35053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 11:34:24.097673   35053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0629 11:34:24.114397   35053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 11:34:24.130766   35053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0629 11:34:24.147610   35053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /usr/share/ca-certificates/243562.pem (1708 bytes)
	I0629 11:34:24.165381   35053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 11:34:24.182024   35053 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem --> /usr/share/ca-certificates/24356.pem (1338 bytes)
	I0629 11:34:24.198989   35053 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0629 11:34:24.211996   35053 ssh_runner.go:195] Run: openssl version
	I0629 11:34:24.217514   35053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/243562.pem && ln -fs /usr/share/ca-certificates/243562.pem /etc/ssl/certs/243562.pem"
	I0629 11:34:24.225613   35053 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/243562.pem
	I0629 11:34:24.229445   35053 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 17:58 /usr/share/ca-certificates/243562.pem
	I0629 11:34:24.229486   35053 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/243562.pem
	I0629 11:34:24.234324   35053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/243562.pem /etc/ssl/certs/3ec20f2e.0"
	I0629 11:34:24.241858   35053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 11:34:24.249589   35053 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:34:24.253850   35053 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 17:54 /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:34:24.253907   35053 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:34:24.259242   35053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 11:34:24.266880   35053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24356.pem && ln -fs /usr/share/ca-certificates/24356.pem /etc/ssl/certs/24356.pem"
	I0629 11:34:24.274664   35053 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24356.pem
	I0629 11:34:24.278474   35053 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 17:58 /usr/share/ca-certificates/24356.pem
	I0629 11:34:24.278520   35053 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24356.pem
	I0629 11:34:24.283673   35053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24356.pem /etc/ssl/certs/51391683.0"
	I0629 11:34:24.291364   35053 kubeadm.go:395] StartCluster: {Name:kubernetes-upgrade-20220629113407-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220629113407-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:34:24.291471   35053 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 11:34:24.319138   35053 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0629 11:34:24.326804   35053 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 11:34:24.334102   35053 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 11:34:24.334164   35053 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 11:34:24.341522   35053 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 11:34:24.341545   35053 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 11:34:25.095658   35053 out.go:204]   - Generating certificates and keys ...
	I0629 11:34:27.478230   35053 out.go:204]   - Booting up control plane ...
	W0629 11:36:22.416237   35053 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-20220629113407-24356 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-20220629113407-24356 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-20220629113407-24356 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-20220629113407-24356 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0629 11:36:22.416272   35053 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0629 11:36:22.838147   35053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 11:36:22.847366   35053 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 11:36:22.847423   35053 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 11:36:22.854563   35053 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 11:36:22.854581   35053 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 11:36:23.622205   35053 out.go:204]   - Generating certificates and keys ...
	I0629 11:36:24.111280   35053 out.go:204]   - Booting up control plane ...
	I0629 11:38:19.014499   35053 kubeadm.go:397] StartCluster complete in 3m54.717635321s
	I0629 11:38:19.014583   35053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:38:19.043372   35053 logs.go:274] 0 containers: []
	W0629 11:38:19.043384   35053 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:38:19.043445   35053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:38:19.071822   35053 logs.go:274] 0 containers: []
	W0629 11:38:19.071835   35053 logs.go:276] No container was found matching "etcd"
	I0629 11:38:19.071892   35053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:38:19.100804   35053 logs.go:274] 0 containers: []
	W0629 11:38:19.100816   35053 logs.go:276] No container was found matching "coredns"
	I0629 11:38:19.100875   35053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:38:19.129827   35053 logs.go:274] 0 containers: []
	W0629 11:38:19.129840   35053 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:38:19.129897   35053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:38:19.159259   35053 logs.go:274] 0 containers: []
	W0629 11:38:19.159273   35053 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:38:19.159341   35053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:38:19.188621   35053 logs.go:274] 0 containers: []
	W0629 11:38:19.188634   35053 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:38:19.188694   35053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:38:19.217845   35053 logs.go:274] 0 containers: []
	W0629 11:38:19.217859   35053 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:38:19.217919   35053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:38:19.247195   35053 logs.go:274] 0 containers: []
	W0629 11:38:19.247209   35053 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:38:19.247217   35053 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:38:19.247230   35053 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:38:19.299520   35053 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:38:19.299530   35053 logs.go:123] Gathering logs for Docker ...
	I0629 11:38:19.299537   35053 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:38:19.315068   35053 logs.go:123] Gathering logs for container status ...
	I0629 11:38:19.315080   35053 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:38:21.367244   35053 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052103192s)
	I0629 11:38:21.367383   35053 logs.go:123] Gathering logs for kubelet ...
	I0629 11:38:21.367390   35053 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:38:21.407076   35053 logs.go:123] Gathering logs for dmesg ...
	I0629 11:38:21.407089   35053 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0629 11:38:21.420363   35053 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0629 11:38:21.420380   35053 out.go:239] * 
	W0629 11:38:21.420500   35053 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0629 11:38:21.420514   35053 out.go:239] * 
	W0629 11:38:21.421091   35053 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0629 11:38:21.484927   35053 out.go:177] 
	W0629 11:38:21.528127   35053 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0629 11:38:21.528351   35053 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0629 11:38:21.528471   35053 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0629 11:38:21.570050   35053 out.go:177] 

                                                
                                                
** /stderr **
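The kubeadm stderr above flags two preflight conditions on the node ([WARNING Swap] and [WARNING Service-Kubelet]). A minimal sketch of checking both, assuming the Linux defaults kubeadm itself uses (`/proc/swaps`, `systemctl`); in this job these would apply inside the kic container, not on the macOS host:

```shell
#!/usr/bin/env sh
# Sketch of the two preflight conditions kubeadm warned about.
# Paths and commands are Linux/systemd defaults; hosts without them
# fall through to the safe default branches.

check_swap() {
  # "on" if /proc/swaps lists any device beyond its header line, else "off".
  if [ -r /proc/swaps ] && [ "$(wc -l < /proc/swaps)" -gt 1 ]; then
    echo on
  else
    echo off
  fi
}

check_kubelet_enabled() {
  # Echo systemd's enablement state for kubelet, defaulting to "disabled"
  # when the unit is missing and "unknown" when systemctl is unavailable.
  if command -v systemctl >/dev/null 2>&1; then
    state="$(systemctl is-enabled kubelet 2>/dev/null || true)"
    echo "${state:-disabled}"
  else
    echo unknown
  fi
}

check_swap
check_kubelet_enabled
```

If swap reports `on`, kubeadm's advice is to disable it; if kubelet is not `enabled`, the log suggests `systemctl enable kubelet.service`.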
version_upgrade_test.go:231: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220629113407-24356 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220629113407-24356
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220629113407-24356: (1.655718948s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20220629113407-24356 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-20220629113407-24356 status --format={{.Host}}: exit status 7 (117.678411ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220629113407-24356 --memory=2200 --kubernetes-version=v1.24.2 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220629113407-24356 --memory=2200 --kubernetes-version=v1.24.2 --alsologtostderr -v=1 --driver=docker : (4m38.817368395s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220629113407-24356 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220629113407-24356 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220629113407-24356 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (459.130455ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20220629113407-24356] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14420
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.24.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220629113407-24356
	    minikube start -p kubernetes-upgrade-20220629113407-24356 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220629113407-243562 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.24.2, by running:
	    
	    minikube start -p kubernetes-upgrade-20220629113407-24356 --kubernetes-version=v1.24.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220629113407-24356 --memory=2200 --kubernetes-version=v1.24.2 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:282: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220629113407-24356 --memory=2200 --kubernetes-version=v1.24.2 --alsologtostderr -v=1 --driver=docker : (40.230004884s)
version_upgrade_test.go:286: *** TestKubernetesUpgrade FAILED at 2022-06-29 11:43:43.032456 -0700 PDT m=+3089.447288138
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20220629113407-24356
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-20220629113407-24356:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a3f79931f1ecdd2bc630a61ab3cff74b53132ad3d36c6de41bc924a05e6cca74",
	        "Created": "2022-06-29T18:34:18.11417066Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 162012,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T18:38:25.179204966Z",
	            "FinishedAt": "2022-06-29T18:38:22.161025047Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/a3f79931f1ecdd2bc630a61ab3cff74b53132ad3d36c6de41bc924a05e6cca74/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a3f79931f1ecdd2bc630a61ab3cff74b53132ad3d36c6de41bc924a05e6cca74/hostname",
	        "HostsPath": "/var/lib/docker/containers/a3f79931f1ecdd2bc630a61ab3cff74b53132ad3d36c6de41bc924a05e6cca74/hosts",
	        "LogPath": "/var/lib/docker/containers/a3f79931f1ecdd2bc630a61ab3cff74b53132ad3d36c6de41bc924a05e6cca74/a3f79931f1ecdd2bc630a61ab3cff74b53132ad3d36c6de41bc924a05e6cca74-json.log",
	        "Name": "/kubernetes-upgrade-20220629113407-24356",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "kubernetes-upgrade-20220629113407-24356:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20220629113407-24356",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9df465cd3a8f7492c36020abefd6f926835a90bfe6df07814c82ced16ba899ef-init/diff:/var/lib/docker/overlay2/fffebe0fdfada5807aeb835ff23043496ab70477725ee4f168b630301ac03e45/diff:/var/lib/docker/overlay2/d4eb6d2f34aa8e5c143d900dccdec5da9e3d130567442e6745d4efac5202fe49/diff:/var/lib/docker/overlay2/eb35fadba12ed9c48500d69b77e98e7dd72e90d3de5197d58b370df5b5dca4c7/diff:/var/lib/docker/overlay2/7b63894f671ef1edaa7c3b80a2acbde52dcdb21970e320799b6884e79553ea3e/diff:/var/lib/docker/overlay2/3740b6bc6ff226137eb09a6350d4395dc04bd9012c6c66125dc2ea6b663082cd/diff:/var/lib/docker/overlay2/a2fda66ed4937725e85838baed61cac418abe2ba55b4e664bf944246efcdd371/diff:/var/lib/docker/overlay2/574408913c5c73ee699b85768bbb4c0ce70e697bf6eb623e32017c62e8413acd/diff:/var/lib/docker/overlay2/1cde03c3877bfb18ad0533f814863e3030abec268ff30faceab8815ea7e2daf2/diff:/var/lib/docker/overlay2/52bf889e64b2ea0160f303622d5febb9c52b864e5a6dc2bfa5db90933ccaaa29/diff:/var/lib/docker/overlay2/b131e6
ae4a7a7f5705d087e4001676276e4daa26d6acfc99799bb4992e322410/diff:/var/lib/docker/overlay2/3f5c774f6f46936a974bfc6530b012fda75a59b22450e3342486fe400ab4b531/diff:/var/lib/docker/overlay2/8462528084f0c44a79e421427e0e4bc9ddd7642428c47ff1899d41b265223245/diff:/var/lib/docker/overlay2/cb9765866d13ba37669ec242ea0a1af87c92c7291c716e52037a2ccadc64ac82/diff:/var/lib/docker/overlay2/f0d06e6fa53f3ca9622f1efcfac6fe3fd18d2e5b9e07be3d624b0b9987073e55/diff:/var/lib/docker/overlay2/4ebd12d8b25cff2d3d8a989c047b696088121f0964cc7f94c6d0178ef16e3e1f/diff:/var/lib/docker/overlay2/40e16f5720fd3a8c1c8792aea0ec143af819f19cad845dde40b57ed7e372ab73/diff:/var/lib/docker/overlay2/3ce5ee64ba683c997a13b7ffa65978b4c9652772729737facd794209d49251c3/diff:/var/lib/docker/overlay2/c55c549a78d490ea576942661ba65103ea2992693548217973bb8fa1a5948b74/diff:/var/lib/docker/overlay2/4651b16dbc2e22b8a43dc1154546514f2076168d12f9c108f85fe7c6e60325f0/diff:/var/lib/docker/overlay2/9576343ea03501b15b520a83ffdc675c6d9ecd501f6ffcf6564dd75aa4f2812a/diff:/var/lib/d
ocker/overlay2/635ba7d01f96fd1ec1acabf157f4e5c00cbf80adf65b7f8873e444745fef2c9b/diff:/var/lib/docker/overlay2/6bbe0ce6ca00a7eb5bd7c22def5fcab4ebecab4a0b4cbc5ed236429671a41b6c/diff:/var/lib/docker/overlay2/b335551ba0fcfd6bff6ef5627289041f3083dc338e67b4f4728d4937bb6fb33a/diff:/var/lib/docker/overlay2/58cd90f6ad9016f3c4befb63eac504c9d2f0fc66251c5c9e3348080785d3cec4/diff:/var/lib/docker/overlay2/b7d943a8463e032d405d531846436b89574f10efeea6e4f2df92e3bb0e169d8e/diff:/var/lib/docker/overlay2/e633899f71c18e322af1b75837392bc89fd4275534b5bc70037965b0b80a770d/diff:/var/lib/docker/overlay2/651aabda39b5851bd186e23bc84f1029d819ed8eb032b13ac12f50f3d1486bfb/diff:/var/lib/docker/overlay2/3b137e27694d242a419b3fd2f8605837edfe77dae9462c63c3d7b41538e82591/diff:/var/lib/docker/overlay2/e9d4369b871c47acb146b73f8cbe14b89b0f74027df9117a7dc73f5dee8fee1c/diff:/var/lib/docker/overlay2/9379269362a969b07cc7d7f9faff9fa3b745529df38758733014a5dbe2470775/diff:/var/lib/docker/overlay2/9231c154723fa536d9894f703ec0388448e8611d5a01d54bca3a5b0a0b1
7ffd2/diff:/var/lib/docker/overlay2/9610e37ded5c6da7bd2c8edc56c3ae864637bb354f8ea3d6d1ccee6bd5c2aa7f/diff:/var/lib/docker/overlay2/025ecca5e756b1b8177204df7b2f2567a76dda456b2f1a8e312efd63150a8943/diff:/var/lib/docker/overlay2/7e69089e438e096c36ea0a4a37280fd036841e3287e57635e3407eb58fc0b6da/diff:/var/lib/docker/overlay2/c6d9ef67ed33e64c8ac8c4cdc7c33eb68f5266987969676165cabc2cf2fd346b/diff:/var/lib/docker/overlay2/394627c68237f7993b91eb0c377001630bb2e709dd58f65d899d44a3586dae91/diff:/var/lib/docker/overlay2/0c0c3c94789fc85cd70d9ee2b56d67ce6471d4dced47f21f15152d4edb6bc3e5/diff:/var/lib/docker/overlay2/849809e48c9bcbfe092aa063fcd274f284eeacde89acbb602b439d4cf0aef9b6/diff:/var/lib/docker/overlay2/49c27f0a55f204b161aa2da33ba8004f46cb93bf673975ad1b6286ce659db632/diff:/var/lib/docker/overlay2/a712a8f5cdb2f3840c706296240407405826d2936df034393c1ddf3cf2480b5f/diff:/var/lib/docker/overlay2/47949bfd134ff7a50def5e9b3af3424faf216354d1f157552f3c63c67c2728ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9df465cd3a8f7492c36020abefd6f926835a90bfe6df07814c82ced16ba899ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9df465cd3a8f7492c36020abefd6f926835a90bfe6df07814c82ced16ba899ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9df465cd3a8f7492c36020abefd6f926835a90bfe6df07814c82ced16ba899ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20220629113407-24356",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20220629113407-24356/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20220629113407-24356",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20220629113407-24356",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20220629113407-24356",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2f7d6b526ac013d58453acef3b920f4379f491703e2b9725c5a75b0900c8ef06",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57166"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57167"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57168"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57169"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57170"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2f7d6b526ac0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20220629113407-24356": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a3f79931f1ec",
	                        "kubernetes-upgrade-20220629113407-24356"
	                    ],
	                    "NetworkID": "0103a949465f2bddecd388a9f8b8f9ff0c78f4dbbba53d17110003892dd6a590",
	                    "EndpointID": "2a2351db5b178c633adefbf7a8bd32305da9a75f6e4a535d45bae6d35109dbcf",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220629113407-24356 -n kubernetes-upgrade-20220629113407-24356
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20220629113407-24356 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-20220629113407-24356 logs -n 25: (2.554965305s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                  Args                   | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------|----------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-20220629113612-24356           | minikube | jenkins | v1.26.0 | 29 Jun 22 11:37 PDT | 29 Jun 22 11:37 PDT |
	|         | --alsologtostderr -v=5                  |          |         |         |                     |                     |
	| stop    | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:38 PDT | 29 Jun 22 11:38 PDT |
	|         | kubernetes-upgrade-20220629113407-24356 |          |         |         |                     |                     |
	| start   | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:38 PDT | 29 Jun 22 11:43 PDT |
	|         | kubernetes-upgrade-20220629113407-24356 |          |         |         |                     |                     |
	|         | --memory=2200                           |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker  |          |         |         |                     |                     |
	| delete  | -p pause-20220629113612-24356           | minikube | jenkins | v1.26.0 | 29 Jun 22 11:38 PDT | 29 Jun 22 11:38 PDT |
	| start   | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:38 PDT |                     |
	|         | NoKubernetes-20220629113845-24356       |          |         |         |                     |                     |
	|         | --no-kubernetes                         |          |         |         |                     |                     |
	|         | --kubernetes-version=1.20               |          |         |         |                     |                     |
	|         | --driver=docker                         |          |         |         |                     |                     |
	| start   | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:38 PDT | 29 Jun 22 11:39 PDT |
	|         | NoKubernetes-20220629113845-24356       |          |         |         |                     |                     |
	|         | --driver=docker                         |          |         |         |                     |                     |
	| start   | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:39 PDT | 29 Jun 22 11:39 PDT |
	|         | NoKubernetes-20220629113845-24356       |          |         |         |                     |                     |
	|         | --no-kubernetes --driver=docker         |          |         |         |                     |                     |
	| delete  | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:39 PDT | 29 Jun 22 11:39 PDT |
	|         | NoKubernetes-20220629113845-24356       |          |         |         |                     |                     |
	| start   | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:39 PDT | 29 Jun 22 11:39 PDT |
	|         | NoKubernetes-20220629113845-24356       |          |         |         |                     |                     |
	|         | --no-kubernetes --driver=docker         |          |         |         |                     |                     |
	| ssh     | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:39 PDT |                     |
	|         | NoKubernetes-20220629113845-24356       |          |         |         |                     |                     |
	|         | sudo systemctl is-active --quiet        |          |         |         |                     |                     |
	|         | service kubelet                         |          |         |         |                     |                     |
	| profile | list                                    | minikube | jenkins | v1.26.0 | 29 Jun 22 11:39 PDT | 29 Jun 22 11:39 PDT |
	| profile | list --output=json                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:39 PDT | 29 Jun 22 11:39 PDT |
	| stop    | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:39 PDT | 29 Jun 22 11:39 PDT |
	|         | NoKubernetes-20220629113845-24356       |          |         |         |                     |                     |
	| start   | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:39 PDT | 29 Jun 22 11:39 PDT |
	|         | NoKubernetes-20220629113845-24356       |          |         |         |                     |                     |
	|         | --driver=docker                         |          |         |         |                     |                     |
	| ssh     | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:39 PDT |                     |
	|         | NoKubernetes-20220629113845-24356       |          |         |         |                     |                     |
	|         | sudo systemctl is-active --quiet        |          |         |         |                     |                     |
	|         | service kubelet                         |          |         |         |                     |                     |
	| delete  | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:39 PDT | 29 Jun 22 11:39 PDT |
	|         | NoKubernetes-20220629113845-24356       |          |         |         |                     |                     |
	| start   | -p auto-20220629112950-24356            | minikube | jenkins | v1.26.0 | 29 Jun 22 11:39 PDT | 29 Jun 22 11:40 PDT |
	|         | --memory=2048                           |          |         |         |                     |                     |
	|         | --alsologtostderr                       |          |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m           |          |         |         |                     |                     |
	|         | --driver=docker                         |          |         |         |                     |                     |
	| ssh     | -p auto-20220629112950-24356            | minikube | jenkins | v1.26.0 | 29 Jun 22 11:40 PDT | 29 Jun 22 11:40 PDT |
	|         | pgrep -a kubelet                        |          |         |         |                     |                     |
	| delete  | -p auto-20220629112950-24356            | minikube | jenkins | v1.26.0 | 29 Jun 22 11:41 PDT | 29 Jun 22 11:41 PDT |
	| start   | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:41 PDT | 29 Jun 22 11:42 PDT |
	|         | kindnet-20220629112951-24356            |          |         |         |                     |                     |
	|         | --memory=2048                           |          |         |         |                     |                     |
	|         | --alsologtostderr                       |          |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m           |          |         |         |                     |                     |
	|         | --cni=kindnet --driver=docker           |          |         |         |                     |                     |
	| ssh     | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:42 PDT | 29 Jun 22 11:42 PDT |
	|         | kindnet-20220629112951-24356            |          |         |         |                     |                     |
	|         | pgrep -a kubelet                        |          |         |         |                     |                     |
	| delete  | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:42 PDT | 29 Jun 22 11:42 PDT |
	|         | kindnet-20220629112951-24356            |          |         |         |                     |                     |
	| start   | -p cilium-20220629112951-24356          | minikube | jenkins | v1.26.0 | 29 Jun 22 11:42 PDT |                     |
	|         | --memory=2048                           |          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true           |          |         |         |                     |                     |
	|         | --wait-timeout=5m --cni=cilium          |          |         |         |                     |                     |
	|         | --driver=docker                         |          |         |         |                     |                     |
	| start   | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:43 PDT |                     |
	|         | kubernetes-upgrade-20220629113407-24356 |          |         |         |                     |                     |
	|         | --memory=2200                           |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0            |          |         |         |                     |                     |
	|         | --driver=docker                         |          |         |         |                     |                     |
	| start   | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:43 PDT | 29 Jun 22 11:43 PDT |
	|         | kubernetes-upgrade-20220629113407-24356 |          |         |         |                     |                     |
	|         | --memory=2200                           |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker  |          |         |         |                     |                     |
	|---------|-----------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/29 11:43:02
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 11:43:02.852763   37330 out.go:296] Setting OutFile to fd 1 ...
	I0629 11:43:02.852918   37330 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:43:02.852924   37330 out.go:309] Setting ErrFile to fd 2...
	I0629 11:43:02.852929   37330 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:43:02.853242   37330 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 11:43:02.853499   37330 out.go:303] Setting JSON to false
	I0629 11:43:02.869561   37330 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":9750,"bootTime":1656518432,"procs":377,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0629 11:43:02.869711   37330 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 11:43:02.892227   37330 out.go:177] * [kubernetes-upgrade-20220629113407-24356] minikube v1.26.0 on Darwin 12.4
	I0629 11:43:02.950739   37330 notify.go:193] Checking for updates...
	I0629 11:43:02.977278   37330 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 11:43:03.062797   37330 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 11:43:03.142614   37330 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0629 11:43:03.205260   37330 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 11:43:03.230750   37330 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 11:43:03.253087   37330 config.go:178] Loaded profile config "kubernetes-upgrade-20220629113407-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 11:43:03.253681   37330 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 11:43:03.325865   37330 docker.go:137] docker version: linux-20.10.16
	I0629 11:43:03.325994   37330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:43:03.452252   37330 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:56 SystemTime:2022-06-29 18:43:03.391183856 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:43:03.496039   37330 out.go:177] * Using the docker driver based on existing profile
	I0629 11:43:03.516889   37330 start.go:284] selected driver: docker
	I0629 11:43:03.516918   37330 start.go:808] validating driver "docker" against &{Name:kubernetes-upgrade-20220629113407-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:kubernetes-upgrade-20220629113407-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:43:03.517075   37330 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 11:43:03.520511   37330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:43:03.642617   37330 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:56 SystemTime:2022-06-29 18:43:03.584538029 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:43:03.642754   37330 cni.go:95] Creating CNI manager for ""
	I0629 11:43:03.642765   37330 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:43:03.642776   37330 start_flags.go:310] config:
	{Name:kubernetes-upgrade-20220629113407-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:kubernetes-upgrade-20220629113407-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:43:03.664899   37330 out.go:177] * Starting control plane node kubernetes-upgrade-20220629113407-24356 in cluster kubernetes-upgrade-20220629113407-24356
	I0629 11:43:03.686653   37330 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 11:43:03.708461   37330 out.go:177] * Pulling base image ...
	I0629 11:43:03.783769   37330 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 11:43:03.783870   37330 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 11:43:03.783879   37330 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0629 11:43:03.783907   37330 cache.go:57] Caching tarball of preloaded images
	I0629 11:43:03.784113   37330 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 11:43:03.784136   37330 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0629 11:43:03.785126   37330 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/config.json ...
	I0629 11:43:03.850504   37330 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 11:43:03.850520   37330 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 11:43:03.850532   37330 cache.go:208] Successfully downloaded all kic artifacts
	I0629 11:43:03.850586   37330 start.go:352] acquiring machines lock for kubernetes-upgrade-20220629113407-24356: {Name:mkc74a80cdb36272141051e347a92a2de37814fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 11:43:03.850664   37330 start.go:356] acquired machines lock for "kubernetes-upgrade-20220629113407-24356" in 61.03µs
	I0629 11:43:03.850687   37330 start.go:94] Skipping create...Using existing machine configuration
	I0629 11:43:03.850695   37330 fix.go:55] fixHost starting: 
	I0629 11:43:03.850928   37330 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220629113407-24356 --format={{.State.Status}}
	I0629 11:43:03.925004   37330 fix.go:103] recreateIfNeeded on kubernetes-upgrade-20220629113407-24356: state=Running err=<nil>
	W0629 11:43:03.925031   37330 fix.go:129] unexpected machine state, will restart: <nil>
	I0629 11:43:03.988667   37330 out.go:177] * Updating the running docker "kubernetes-upgrade-20220629113407-24356" container ...
	I0629 11:43:00.325021   37172 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:43:00.827091   37172 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:43:01.326388   37172 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:43:01.826936   37172 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:43:02.326525   37172 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:43:02.825141   37172 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:43:03.325197   37172 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:43:03.827039   37172 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:43:04.325075   37172 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:43:04.825614   37172 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:43:05.325608   37172 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:43:05.826120   37172 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:43:06.325183   37172 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:43:06.825419   37172 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:43:06.908684   37172 kubeadm.go:1045] duration metric: took 10.215761836s to wait for elevateKubeSystemPrivileges.
	I0629 11:43:06.908699   37172 kubeadm.go:397] StartCluster complete in 28.863189774s
	I0629 11:43:06.908714   37172 settings.go:142] acquiring lock: {Name:mk8cd784535a926dd1b6955ad1b3a357865d16d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:43:06.908785   37172 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 11:43:06.909517   37172 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk20ebad566718388182fa7c9da1cb4ef6bd9ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:43:07.425048   37172 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cilium-20220629112951-24356" rescaled to 1
	I0629 11:43:07.425080   37172 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 11:43:07.425104   37172 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0629 11:43:07.466117   37172 out.go:177] * Verifying Kubernetes components...
	I0629 11:43:07.425123   37172 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I0629 11:43:07.425269   37172 config.go:178] Loaded profile config "cilium-20220629112951-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 11:43:07.523054   37172 addons.go:65] Setting default-storageclass=true in profile "cilium-20220629112951-24356"
	I0629 11:43:07.523054   37172 addons.go:65] Setting storage-provisioner=true in profile "cilium-20220629112951-24356"
	I0629 11:43:07.523061   37172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 11:43:07.523077   37172 addons.go:153] Setting addon storage-provisioner=true in "cilium-20220629112951-24356"
	W0629 11:43:07.523083   37172 addons.go:162] addon storage-provisioner should already be in state true
	I0629 11:43:07.523090   37172 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cilium-20220629112951-24356"
	I0629 11:43:07.523134   37172 host.go:66] Checking if "cilium-20220629112951-24356" exists ...
	I0629 11:43:07.523368   37172 cli_runner.go:164] Run: docker container inspect cilium-20220629112951-24356 --format={{.State.Status}}
	I0629 11:43:07.523480   37172 cli_runner.go:164] Run: docker container inspect cilium-20220629112951-24356 --format={{.State.Status}}
	I0629 11:43:07.541961   37172 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0629 11:43:07.555444   37172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20220629112951-24356
	I0629 11:43:07.640437   37172 addons.go:153] Setting addon default-storageclass=true in "cilium-20220629112951-24356"
	W0629 11:43:07.652104   37172 addons.go:162] addon default-storageclass should already be in state true
	I0629 11:43:07.652055   37172 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 11:43:04.010443   37330 machine.go:88] provisioning docker machine ...
	I0629 11:43:04.010503   37330 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220629113407-24356"
	I0629 11:43:04.010641   37330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:43:04.084375   37330 main.go:134] libmachine: Using SSH client type: native
	I0629 11:43:04.084569   37330 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 57166 <nil> <nil>}
	I0629 11:43:04.084582   37330 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20220629113407-24356 && echo "kubernetes-upgrade-20220629113407-24356" | sudo tee /etc/hostname
	I0629 11:43:04.209770   37330 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220629113407-24356
	
	I0629 11:43:04.209843   37330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:43:04.282826   37330 main.go:134] libmachine: Using SSH client type: native
	I0629 11:43:04.282969   37330 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 57166 <nil> <nil>}
	I0629 11:43:04.282984   37330 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20220629113407-24356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220629113407-24356/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20220629113407-24356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 11:43:04.402375   37330 main.go:134] libmachine: SSH cmd err, output: <nil>: 
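The SSH command whose output is logged above performs an idempotent /etc/hosts edit: map 127.0.1.1 to the machine hostname, rewriting an existing 127.0.1.1 line if present and appending one otherwise. A local re-creation of that logic as a shell function (the function name is invented for this sketch):

```shell
# Sketch only: the idempotent /etc/hosts edit minikube runs over SSH.
# update_hosts_entry FILE NAME: ensure FILE maps 127.0.1.1 to NAME exactly once.
update_hosts_entry() {
  file="$1"; name="$2"
  # already mapped (some IP followed by whitespace and NAME)? then do nothing
  if ! grep -q "[[:space:]]${name}\$" "$file"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$file"; then
      # rewrite the existing 127.0.1.1 line in place
      sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${name}/" "$file"
    else
      # no 127.0.1.1 line yet: append one
      echo "127.0.1.1 ${name}" >> "$file"
    fi
  fi
}
```

Running it twice leaves a single entry, which is why the provisioner can safely re-run it on every `fixHost`.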
	I0629 11:43:04.402399   37330 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube}
	I0629 11:43:04.402433   37330 ubuntu.go:177] setting up certificates
	I0629 11:43:04.402446   37330 provision.go:83] configureAuth start
	I0629 11:43:04.402503   37330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220629113407-24356
	I0629 11:43:04.473636   37330 provision.go:138] copyHostCerts
	I0629 11:43:04.473874   37330 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem, removing ...
	I0629 11:43:04.473883   37330 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem
	I0629 11:43:04.473987   37330 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem (1675 bytes)
	I0629 11:43:04.474214   37330 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem, removing ...
	I0629 11:43:04.474223   37330 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem
	I0629 11:43:04.474281   37330 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem (1082 bytes)
	I0629 11:43:04.474425   37330 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem, removing ...
	I0629 11:43:04.474431   37330 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem
	I0629 11:43:04.474488   37330 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem (1123 bytes)
	I0629 11:43:04.474595   37330 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20220629113407-24356 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220629113407-24356]
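Note that the `san=[...]` list logged above contains `127.0.0.1` twice; certificate generation tolerates that, but a hypothetical helper (not minikube code) could deduplicate such a space-separated SAN list while preserving first-seen order:

```shell
# Hypothetical helper: dedupe a space-separated SAN list, keeping order.
dedupe_sans() {
  tr ' ' '\n' | awk 'NF && !seen[$0]++' | paste -sd' ' -
}
```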
	I0629 11:43:04.550892   37330 provision.go:172] copyRemoteCerts
	I0629 11:43:04.550961   37330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 11:43:04.551014   37330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:43:04.624425   37330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57166 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/kubernetes-upgrade-20220629113407-24356/id_rsa Username:docker}
	I0629 11:43:04.709700   37330 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0629 11:43:04.727543   37330 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0629 11:43:04.746255   37330 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
	I0629 11:43:04.763585   37330 provision.go:86] duration metric: configureAuth took 361.117342ms
	I0629 11:43:04.763597   37330 ubuntu.go:193] setting minikube options for container-runtime
	I0629 11:43:04.763739   37330 config.go:178] Loaded profile config "kubernetes-upgrade-20220629113407-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 11:43:04.763811   37330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:43:04.836906   37330 main.go:134] libmachine: Using SSH client type: native
	I0629 11:43:04.837069   37330 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 57166 <nil> <nil>}
	I0629 11:43:04.837081   37330 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0629 11:43:04.957170   37330 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0629 11:43:04.957197   37330 ubuntu.go:71] root file system type: overlay
	I0629 11:43:04.957339   37330 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0629 11:43:04.957413   37330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:43:05.029266   37330 main.go:134] libmachine: Using SSH client type: native
	I0629 11:43:05.029413   37330 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 57166 <nil> <nil>}
	I0629 11:43:05.029462   37330 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0629 11:43:05.159666   37330 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0629 11:43:05.159752   37330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:43:05.232274   37330 main.go:134] libmachine: Using SSH client type: native
	I0629 11:43:05.232465   37330 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 57166 <nil> <nil>}
	I0629 11:43:05.232478   37330 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0629 11:43:05.352455   37330 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 11:43:05.352467   37330 machine.go:91] provisioned docker machine in 1.341973655s
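The docker.service update a few lines above uses a compare-then-swap pattern: write the rendered unit to `docker.service.new`, and only if it differs from the installed unit move it into place and restart Docker. A sketch of just the file-handling half (assumption: the real command additionally runs `systemctl daemon-reload`, `systemctl enable docker`, and `systemctl restart docker` on the changed path; the function name is invented here):

```shell
# Sketch only: install CANDIDATE over CURRENT only when the contents differ,
# so an unchanged unit file never triggers a needless service restart.
replace_if_changed() {
  current="$1"; candidate="$2"
  if [ -f "$current" ] && diff -u "$current" "$candidate" >/dev/null 2>&1; then
    rm -f "$candidate"            # identical: discard candidate, skip restart
    echo unchanged
  else
    mv "$candidate" "$current"    # changed (or new): install the candidate
    echo changed
  fi
}
```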
	I0629 11:43:05.352479   37330 start.go:306] post-start starting for "kubernetes-upgrade-20220629113407-24356" (driver="docker")
	I0629 11:43:05.352484   37330 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 11:43:05.352549   37330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 11:43:05.352607   37330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:43:05.428036   37330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57166 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/kubernetes-upgrade-20220629113407-24356/id_rsa Username:docker}
	I0629 11:43:05.514166   37330 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 11:43:05.517593   37330 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 11:43:05.517621   37330 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 11:43:05.517628   37330 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 11:43:05.517633   37330 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 11:43:05.517644   37330 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/addons for local assets ...
	I0629 11:43:05.517748   37330 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files for local assets ...
	I0629 11:43:05.517876   37330 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem -> 243562.pem in /etc/ssl/certs
	I0629 11:43:05.518029   37330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 11:43:05.525388   37330 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /etc/ssl/certs/243562.pem (1708 bytes)
	I0629 11:43:05.543404   37330 start.go:309] post-start completed in 190.911436ms
	I0629 11:43:05.543480   37330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 11:43:05.543529   37330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:43:05.614803   37330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57166 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/kubernetes-upgrade-20220629113407-24356/id_rsa Username:docker}
	I0629 11:43:05.698723   37330 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 11:43:05.703569   37330 fix.go:57] fixHost completed within 1.852830179s
	I0629 11:43:05.703579   37330 start.go:81] releasing machines lock for "kubernetes-upgrade-20220629113407-24356", held for 1.852865004s
	I0629 11:43:05.703644   37330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220629113407-24356
	I0629 11:43:05.775076   37330 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 11:43:05.775141   37330 ssh_runner.go:195] Run: systemctl --version
	I0629 11:43:05.775185   37330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:43:05.775190   37330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:43:05.855928   37330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57166 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/kubernetes-upgrade-20220629113407-24356/id_rsa Username:docker}
	I0629 11:43:05.857814   37330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57166 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/kubernetes-upgrade-20220629113407-24356/id_rsa Username:docker}
	I0629 11:43:06.424590   37330 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0629 11:43:06.437960   37330 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0629 11:43:06.438039   37330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 11:43:06.449684   37330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 11:43:06.463456   37330 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0629 11:43:06.554407   37330 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0629 11:43:06.651825   37330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 11:43:06.744600   37330 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0629 11:43:07.652139   37172 host.go:66] Checking if "cilium-20220629112951-24356" exists ...
	I0629 11:43:07.672027   37172 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 11:43:07.672046   37172 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0629 11:43:07.672142   37172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629112951-24356
	I0629 11:43:07.673280   37172 cli_runner.go:164] Run: docker container inspect cilium-20220629112951-24356 --format={{.State.Status}}
	I0629 11:43:07.693922   37172 node_ready.go:35] waiting up to 5m0s for node "cilium-20220629112951-24356" to be "Ready" ...
	I0629 11:43:07.700818   37172 node_ready.go:49] node "cilium-20220629112951-24356" has status "Ready":"True"
	I0629 11:43:07.700830   37172 node_ready.go:38] duration metric: took 6.872415ms waiting for node "cilium-20220629112951-24356" to be "Ready" ...
	I0629 11:43:07.700840   37172 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 11:43:07.711240   37172 pod_ready.go:78] waiting up to 5m0s for pod "cilium-lclt2" in "kube-system" namespace to be "Ready" ...
	I0629 11:43:07.773099   37172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58311 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/cilium-20220629112951-24356/id_rsa Username:docker}
	I0629 11:43:07.776424   37172 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0629 11:43:07.776445   37172 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0629 11:43:07.776521   37172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220629112951-24356
	I0629 11:43:07.842167   37172 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0629 11:43:07.879780   37172 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 11:43:07.884933   37172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58311 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/cilium-20220629112951-24356/id_rsa Username:docker}
	I0629 11:43:07.981759   37172 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0629 11:43:08.245746   37172 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0629 11:43:08.282686   37172 addons.go:414] enableAddons completed in 857.489903ms
	I0629 11:43:09.728909   37172 pod_ready.go:102] pod "cilium-lclt2" in "kube-system" namespace has status "Ready":"False"
	I0629 11:43:11.744424   37172 pod_ready.go:102] pod "cilium-lclt2" in "kube-system" namespace has status "Ready":"False"
	I0629 11:43:14.226360   37172 pod_ready.go:102] pod "cilium-lclt2" in "kube-system" namespace has status "Ready":"False"
	I0629 11:43:16.236338   37172 pod_ready.go:102] pod "cilium-lclt2" in "kube-system" namespace has status "Ready":"False"
	I0629 11:43:18.736721   37172 pod_ready.go:102] pod "cilium-lclt2" in "kube-system" namespace has status "Ready":"False"
	I0629 11:43:20.741889   37172 pod_ready.go:102] pod "cilium-lclt2" in "kube-system" namespace has status "Ready":"False"
	I0629 11:43:23.225275   37172 pod_ready.go:102] pod "cilium-lclt2" in "kube-system" namespace has status "Ready":"False"
	I0629 11:43:27.485946   37330 ssh_runner.go:235] Completed: sudo systemctl restart docker: (20.740830112s)
	I0629 11:43:27.486018   37330 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0629 11:43:27.573625   37330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 11:43:27.736238   37330 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0629 11:43:27.751015   37330 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0629 11:43:27.751091   37330 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0629 11:43:27.757150   37330 start.go:468] Will wait 60s for crictl version
	I0629 11:43:27.757222   37330 ssh_runner.go:195] Run: sudo crictl version
	I0629 11:43:27.844231   37330 start.go:477] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0629 11:43:27.844302   37330 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 11:43:27.962226   37330 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 11:43:25.228886   37172 pod_ready.go:102] pod "cilium-lclt2" in "kube-system" namespace has status "Ready":"False"
	I0629 11:43:27.729671   37172 pod_ready.go:102] pod "cilium-lclt2" in "kube-system" namespace has status "Ready":"False"
	I0629 11:43:28.105742   37330 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0629 11:43:28.105839   37330 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-20220629113407-24356 dig +short host.docker.internal
	I0629 11:43:28.276737   37330 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0629 11:43:28.276862   37330 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0629 11:43:28.325166   37330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:43:28.407907   37330 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 11:43:28.407971   37330 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 11:43:28.530917   37330 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	k8s.gcr.io/coredns:1.6.2
	<none>:<none>
	
	-- /stdout --
	I0629 11:43:28.530936   37330 docker.go:533] Images already preloaded, skipping extraction
	I0629 11:43:28.531001   37330 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 11:43:28.577618   37330 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	<none>:<none>
	k8s.gcr.io/coredns:1.6.2
	<none>:<none>
	
	-- /stdout --
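The two `docker images` listings above contain the same image set in a different order, plus untagged `<none>:<none>` rows, and minikube treats them as equivalent ("Images are preloaded, skipping loading"). A hypothetical helper (not minikube code) showing how such listings can be normalized for a set comparison:

```shell
# Hypothetical helper: normalize `docker images --format {{.Repository}}:{{.Tag}}`
# output read on stdin by dropping untagged rows and sorting, so two listings
# of the same image set compare equal regardless of order.
normalize_images() {
  grep -v '^<none>:<none>$' | sort
}
```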
	I0629 11:43:28.577648   37330 cache_images.go:84] Images are preloaded, skipping loading
	I0629 11:43:28.577756   37330 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0629 11:43:28.924764   37330 cni.go:95] Creating CNI manager for ""
	I0629 11:43:28.924777   37330 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:43:28.924790   37330 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0629 11:43:28.924802   37330 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220629113407-24356 NodeName:kubernetes-upgrade-20220629113407-24356 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 11:43:28.924952   37330 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-20220629113407-24356"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
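The rendered kubeadm config above is copied to the node as `/var/tmp/minikube/kubeadm.yaml.new` a few lines later. A hypothetical sanity check (grep/sed based, not a YAML parser, and not part of minikube) for pulling a field such as `podSubnet` out of such a rendered config:

```shell
# Hypothetical sketch: extract the quoted podSubnet value from a kubeadm
# ClusterConfiguration file, relying on the two-space indent kubeadm uses.
pod_subnet() {
  sed -n 's/^  podSubnet: "\(.*\)"$/\1/p' "$1"
}
```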
	I0629 11:43:28.925028   37330 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-20220629113407-24356 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:kubernetes-upgrade-20220629113407-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0629 11:43:28.925081   37330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0629 11:43:28.934064   37330 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 11:43:28.934153   37330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0629 11:43:28.942492   37330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (501 bytes)
	I0629 11:43:28.959893   37330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 11:43:29.035381   37330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2061 bytes)
	I0629 11:43:29.054359   37330 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0629 11:43:29.059881   37330 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356 for IP: 192.168.76.2
	I0629 11:43:29.060061   37330 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key
	I0629 11:43:29.060152   37330 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key
	I0629 11:43:29.060285   37330 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/client.key
	I0629 11:43:29.060404   37330 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/apiserver.key.31bdca25
	I0629 11:43:29.060478   37330 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/proxy-client.key
	I0629 11:43:29.060751   37330 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem (1338 bytes)
	W0629 11:43:29.060810   37330 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356_empty.pem, impossibly tiny 0 bytes
	I0629 11:43:29.060831   37330 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem (1679 bytes)
	I0629 11:43:29.060876   37330 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem (1082 bytes)
	I0629 11:43:29.060920   37330 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem (1123 bytes)
	I0629 11:43:29.060965   37330 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem (1675 bytes)
	I0629 11:43:29.061050   37330 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem (1708 bytes)
	I0629 11:43:29.061778   37330 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0629 11:43:29.133576   37330 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0629 11:43:29.155617   37330 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0629 11:43:29.228211   37330 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0629 11:43:29.251368   37330 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 11:43:29.274658   37330 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0629 11:43:29.342533   37330 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 11:43:29.425911   37330 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0629 11:43:29.456377   37330 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem --> /usr/share/ca-certificates/24356.pem (1338 bytes)
	I0629 11:43:29.530828   37330 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /usr/share/ca-certificates/243562.pem (1708 bytes)
	I0629 11:43:29.557184   37330 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 11:43:29.630585   37330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0629 11:43:29.649610   37330 ssh_runner.go:195] Run: openssl version
	I0629 11:43:29.659545   37330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/243562.pem && ln -fs /usr/share/ca-certificates/243562.pem /etc/ssl/certs/243562.pem"
	I0629 11:43:29.670919   37330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/243562.pem
	I0629 11:43:29.678319   37330 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 17:58 /usr/share/ca-certificates/243562.pem
	I0629 11:43:29.678411   37330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/243562.pem
	I0629 11:43:29.726926   37330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/243562.pem /etc/ssl/certs/3ec20f2e.0"
	I0629 11:43:29.737317   37330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 11:43:29.749562   37330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:43:29.759834   37330 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 17:54 /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:43:29.759907   37330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:43:29.767625   37330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 11:43:29.826829   37330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24356.pem && ln -fs /usr/share/ca-certificates/24356.pem /etc/ssl/certs/24356.pem"
	I0629 11:43:29.842088   37330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24356.pem
	I0629 11:43:29.848154   37330 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 17:58 /usr/share/ca-certificates/24356.pem
	I0629 11:43:29.848215   37330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24356.pem
	I0629 11:43:29.858399   37330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24356.pem /etc/ssl/certs/51391683.0"
	I0629 11:43:29.868956   37330 kubeadm.go:395] StartCluster: {Name:kubernetes-upgrade-20220629113407-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:kubernetes-upgrade-20220629113407-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:43:29.869124   37330 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 11:43:29.953265   37330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0629 11:43:29.965570   37330 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0629 11:43:29.965591   37330 kubeadm.go:626] restartCluster start
	I0629 11:43:29.965678   37330 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0629 11:43:29.980209   37330 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:43:29.980355   37330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:43:30.068565   37330 kubeconfig.go:92] found "kubernetes-upgrade-20220629113407-24356" server: "https://127.0.0.1:57170"
	I0629 11:43:30.069226   37330 kapi.go:59] client config for kubernetes-upgrade-20220629113407-24356: &rest.Config{Host:"https://127.0.0.1:57170", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fc060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0629 11:43:30.069874   37330 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0629 11:43:30.081200   37330 api_server.go:165] Checking apiserver status ...
	I0629 11:43:30.081278   37330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:43:30.094896   37330 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/11009/cgroup
	W0629 11:43:30.105336   37330 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/11009/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:43:30.105356   37330 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57170/healthz ...
	I0629 11:43:32.153950   37330 api_server.go:266] https://127.0.0.1:57170/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0629 11:43:32.153998   37330 retry.go:31] will retry after 263.082536ms: https://127.0.0.1:57170/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0629 11:43:32.418501   37330 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57170/healthz ...
	I0629 11:43:32.426990   37330 api_server.go:266] https://127.0.0.1:57170/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 11:43:32.427013   37330 retry.go:31] will retry after 381.329545ms: https://127.0.0.1:57170/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 11:43:32.808809   37330 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57170/healthz ...
	I0629 11:43:32.816276   37330 api_server.go:266] https://127.0.0.1:57170/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 11:43:32.816295   37330 retry.go:31] will retry after 422.765636ms: https://127.0.0.1:57170/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 11:43:30.229746   37172 pod_ready.go:102] pod "cilium-lclt2" in "kube-system" namespace has status "Ready":"False"
	I0629 11:43:32.229999   37172 pod_ready.go:102] pod "cilium-lclt2" in "kube-system" namespace has status "Ready":"False"
	I0629 11:43:34.729176   37172 pod_ready.go:102] pod "cilium-lclt2" in "kube-system" namespace has status "Ready":"False"
	I0629 11:43:33.239188   37330 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57170/healthz ...
	I0629 11:43:33.244833   37330 api_server.go:266] https://127.0.0.1:57170/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 11:43:33.244852   37330 retry.go:31] will retry after 473.074753ms: https://127.0.0.1:57170/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 11:43:33.720031   37330 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57170/healthz ...
	I0629 11:43:33.728043   37330 api_server.go:266] https://127.0.0.1:57170/healthz returned 200:
	ok
	I0629 11:43:33.740965   37330 system_pods.go:86] 5 kube-system pods found
	I0629 11:43:33.740983   37330 system_pods.go:89] "etcd-kubernetes-upgrade-20220629113407-24356" [ba82cc3a-6c78-4919-bfb5-cb8d4fefa67e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0629 11:43:33.740993   37330 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-20220629113407-24356" [7db31939-d768-4354-9572-210eef9f72be] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0629 11:43:33.741000   37330 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-20220629113407-24356" [aed2d16f-4ed3-4104-942a-14b3f2181577] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0629 11:43:33.741007   37330 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-20220629113407-24356" [612d2935-2d25-4599-a8a5-ee80773f6217] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0629 11:43:33.741012   37330 system_pods.go:89] "storage-provisioner" [c8299076-1453-4284-b020-f862159135c4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0629 11:43:33.741018   37330 kubeadm.go:610] needs reconfigure: missing components: kube-dns, kube-proxy
	I0629 11:43:33.741024   37330 kubeadm.go:1092] stopping kube-system containers ...
	I0629 11:43:33.741076   37330 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 11:43:33.772062   37330 docker.go:434] Stopping containers: [032101b0795c a31dbbf27229 ee77c48cb748 9488c5c010e9 b95c72f5637f 7bb8b2e8b233 c5dad740d5cd a01d5918eecd 247a2c61bd61 80529cda72c9 c0376af7bd5a e8329c7d7a06 2d5d4f691ef8 6d31573a2462 18975fc883b8 cfcc593f53e5 ebaf507e9ad4 fb15471d5ee2 f2152867af29]
	I0629 11:43:33.772132   37330 ssh_runner.go:195] Run: docker stop 032101b0795c a31dbbf27229 ee77c48cb748 9488c5c010e9 b95c72f5637f 7bb8b2e8b233 c5dad740d5cd a01d5918eecd 247a2c61bd61 80529cda72c9 c0376af7bd5a e8329c7d7a06 2d5d4f691ef8 6d31573a2462 18975fc883b8 cfcc593f53e5 ebaf507e9ad4 fb15471d5ee2 f2152867af29
	I0629 11:43:34.979495   37330 ssh_runner.go:235] Completed: docker stop 032101b0795c a31dbbf27229 ee77c48cb748 9488c5c010e9 b95c72f5637f 7bb8b2e8b233 c5dad740d5cd a01d5918eecd 247a2c61bd61 80529cda72c9 c0376af7bd5a e8329c7d7a06 2d5d4f691ef8 6d31573a2462 18975fc883b8 cfcc593f53e5 ebaf507e9ad4 fb15471d5ee2 f2152867af29: (1.207302432s)
	I0629 11:43:34.979572   37330 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0629 11:43:35.062538   37330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 11:43:35.071414   37330 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jun 29 18:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun 29 18:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2095 Jun 29 18:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun 29 18:42 /etc/kubernetes/scheduler.conf
	
	I0629 11:43:35.071470   37330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0629 11:43:35.079934   37330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0629 11:43:35.129051   37330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0629 11:43:35.136421   37330 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:43:35.136469   37330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0629 11:43:35.145189   37330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0629 11:43:35.153820   37330 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:43:35.153893   37330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0629 11:43:35.163777   37330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 11:43:35.172546   37330 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0629 11:43:35.172561   37330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:43:35.215849   37330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:43:35.710428   37330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:43:35.907285   37330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:43:35.958683   37330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:43:36.007121   37330 api_server.go:51] waiting for apiserver process to appear ...
	I0629 11:43:36.007193   37330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:43:36.531377   37330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:43:37.031376   37330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:43:37.049622   37330 api_server.go:71] duration metric: took 1.042480057s to wait for apiserver process to appear ...
	I0629 11:43:37.049645   37330 api_server.go:87] waiting for apiserver healthz status ...
	I0629 11:43:37.049655   37330 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57170/healthz ...
	I0629 11:43:36.730385   37172 pod_ready.go:102] pod "cilium-lclt2" in "kube-system" namespace has status "Ready":"False"
	I0629 11:43:39.229349   37172 pod_ready.go:102] pod "cilium-lclt2" in "kube-system" namespace has status "Ready":"False"
	I0629 11:43:40.212437   37330 api_server.go:266] https://127.0.0.1:57170/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0629 11:43:40.212456   37330 api_server.go:102] status: https://127.0.0.1:57170/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0629 11:43:40.714628   37330 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57170/healthz ...
	I0629 11:43:40.722366   37330 api_server.go:266] https://127.0.0.1:57170/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 11:43:40.722378   37330 api_server.go:102] status: https://127.0.0.1:57170/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 11:43:41.212718   37330 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57170/healthz ...
	I0629 11:43:41.218314   37330 api_server.go:266] https://127.0.0.1:57170/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 11:43:41.218333   37330 api_server.go:102] status: https://127.0.0.1:57170/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 11:43:41.713454   37330 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57170/healthz ...
	I0629 11:43:41.719491   37330 api_server.go:266] https://127.0.0.1:57170/healthz returned 200:
	ok
	I0629 11:43:41.727082   37330 api_server.go:140] control plane version: v1.24.2
	I0629 11:43:41.727097   37330 api_server.go:130] duration metric: took 4.677337247s to wait for apiserver health ...
	I0629 11:43:41.727106   37330 cni.go:95] Creating CNI manager for ""
	I0629 11:43:41.727112   37330 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:43:41.727120   37330 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 11:43:41.733046   37330 system_pods.go:59] 5 kube-system pods found
	I0629 11:43:41.733060   37330 system_pods.go:61] "etcd-kubernetes-upgrade-20220629113407-24356" [ba82cc3a-6c78-4919-bfb5-cb8d4fefa67e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0629 11:43:41.733068   37330 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-20220629113407-24356" [7db31939-d768-4354-9572-210eef9f72be] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0629 11:43:41.733074   37330 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-20220629113407-24356" [aed2d16f-4ed3-4104-942a-14b3f2181577] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0629 11:43:41.733078   37330 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-20220629113407-24356" [612d2935-2d25-4599-a8a5-ee80773f6217] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0629 11:43:41.733085   37330 system_pods.go:61] "storage-provisioner" [c8299076-1453-4284-b020-f862159135c4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0629 11:43:41.733089   37330 system_pods.go:74] duration metric: took 5.96386ms to wait for pod list to return data ...
	I0629 11:43:41.733096   37330 node_conditions.go:102] verifying NodePressure condition ...
	I0629 11:43:41.735602   37330 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0629 11:43:41.735619   37330 node_conditions.go:123] node cpu capacity is 6
	I0629 11:43:41.735628   37330 node_conditions.go:105] duration metric: took 2.527287ms to run NodePressure ...
	I0629 11:43:41.735639   37330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:43:41.847377   37330 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0629 11:43:41.855074   37330 ops.go:34] apiserver oom_adj: -16
	I0629 11:43:41.855090   37330 kubeadm.go:630] restartCluster took 11.889210689s
	I0629 11:43:41.855098   37330 kubeadm.go:397] StartCluster complete in 11.98587292s
	I0629 11:43:41.855111   37330 settings.go:142] acquiring lock: {Name:mk8cd784535a926dd1b6955ad1b3a357865d16d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:43:41.855193   37330 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 11:43:41.855837   37330 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk20ebad566718388182fa7c9da1cb4ef6bd9ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:43:41.856483   37330 kapi.go:59] client config for kubernetes-upgrade-20220629113407-24356: &rest.Config{Host:"https://127.0.0.1:57170", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fc060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0629 11:43:41.859105   37330 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubernetes-upgrade-20220629113407-24356" rescaled to 1
	I0629 11:43:41.859139   37330 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 11:43:41.859156   37330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0629 11:43:41.859169   37330 addons.go:412] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0629 11:43:41.859275   37330 config.go:178] Loaded profile config "kubernetes-upgrade-20220629113407-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 11:43:41.882137   37330 out.go:177] * Verifying Kubernetes components...
	I0629 11:43:41.882303   37330 addons.go:65] Setting storage-provisioner=true in profile "kubernetes-upgrade-20220629113407-24356"
	I0629 11:43:41.924202   37330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 11:43:41.924210   37330 addons.go:153] Setting addon storage-provisioner=true in "kubernetes-upgrade-20220629113407-24356"
	I0629 11:43:41.882302   37330 addons.go:65] Setting default-storageclass=true in profile "kubernetes-upgrade-20220629113407-24356"
	W0629 11:43:41.924221   37330 addons.go:162] addon storage-provisioner should already be in state true
	I0629 11:43:41.924263   37330 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-20220629113407-24356"
	I0629 11:43:41.924291   37330 host.go:66] Checking if "kubernetes-upgrade-20220629113407-24356" exists ...
	I0629 11:43:41.924672   37330 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220629113407-24356 --format={{.State.Status}}
	I0629 11:43:41.924818   37330 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220629113407-24356 --format={{.State.Status}}
	I0629 11:43:41.943592   37330 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0629 11:43:41.943658   37330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:43:42.016036   37330 kapi.go:59] client config for kubernetes-upgrade-20220629113407-24356: &rest.Config{Host:"https://127.0.0.1:57170", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubernetes-upgrade-20220629113407-24356/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fc060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0629 11:43:42.021962   37330 addons.go:153] Setting addon default-storageclass=true in "kubernetes-upgrade-20220629113407-24356"
	W0629 11:43:42.037924   37330 addons.go:162] addon default-storageclass should already be in state true
	I0629 11:43:42.037887   37330 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 11:43:42.037941   37330 host.go:66] Checking if "kubernetes-upgrade-20220629113407-24356" exists ...
	I0629 11:43:42.038286   37330 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220629113407-24356 --format={{.State.Status}}
	I0629 11:43:42.059149   37330 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 11:43:42.059176   37330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0629 11:43:42.060084   37330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:43:42.066118   37330 api_server.go:51] waiting for apiserver process to appear ...
	I0629 11:43:42.066209   37330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:43:42.077053   37330 api_server.go:71] duration metric: took 217.877973ms to wait for apiserver process to appear ...
	I0629 11:43:42.077108   37330 api_server.go:87] waiting for apiserver healthz status ...
	I0629 11:43:42.077122   37330 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57170/healthz ...
	I0629 11:43:42.083889   37330 api_server.go:266] https://127.0.0.1:57170/healthz returned 200:
	ok
	I0629 11:43:42.086181   37330 api_server.go:140] control plane version: v1.24.2
	I0629 11:43:42.086198   37330 api_server.go:130] duration metric: took 9.081147ms to wait for apiserver health ...
	I0629 11:43:42.086207   37330 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 11:43:42.091475   37330 system_pods.go:59] 5 kube-system pods found
	I0629 11:43:42.091494   37330 system_pods.go:61] "etcd-kubernetes-upgrade-20220629113407-24356" [ba82cc3a-6c78-4919-bfb5-cb8d4fefa67e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0629 11:43:42.091507   37330 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-20220629113407-24356" [7db31939-d768-4354-9572-210eef9f72be] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0629 11:43:42.091527   37330 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-20220629113407-24356" [aed2d16f-4ed3-4104-942a-14b3f2181577] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0629 11:43:42.091533   37330 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-20220629113407-24356" [612d2935-2d25-4599-a8a5-ee80773f6217] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0629 11:43:42.091544   37330 system_pods.go:61] "storage-provisioner" [c8299076-1453-4284-b020-f862159135c4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0629 11:43:42.091548   37330 system_pods.go:74] duration metric: took 5.335462ms to wait for pod list to return data ...
	I0629 11:43:42.091554   37330 kubeadm.go:572] duration metric: took 232.395718ms to wait for : map[apiserver:true system_pods:true] ...
	I0629 11:43:42.091563   37330 node_conditions.go:102] verifying NodePressure condition ...
	I0629 11:43:42.094932   37330 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0629 11:43:42.094946   37330 node_conditions.go:123] node cpu capacity is 6
	I0629 11:43:42.094958   37330 node_conditions.go:105] duration metric: took 3.386129ms to run NodePressure ...
	I0629 11:43:42.094966   37330 start.go:213] waiting for startup goroutines ...
	I0629 11:43:42.141346   37330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57166 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/kubernetes-upgrade-20220629113407-24356/id_rsa Username:docker}
	I0629 11:43:42.143149   37330 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0629 11:43:42.143160   37330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0629 11:43:42.143221   37330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220629113407-24356
	I0629 11:43:42.217487   37330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57166 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/kubernetes-upgrade-20220629113407-24356/id_rsa Username:docker}
	I0629 11:43:42.236621   37330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 11:43:42.313806   37330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0629 11:43:42.878437   37330 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0629 11:43:42.898298   37330 addons.go:414] enableAddons completed in 1.039079551s
	I0629 11:43:42.949449   37330 start.go:506] kubectl: 1.24.0, cluster: 1.24.2 (minor skew: 0)
	I0629 11:43:42.970466   37330 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-20220629113407-24356" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-29 18:38:25 UTC, end at Wed 2022-06-29 18:43:44 UTC. --
	Jun 29 18:43:17 kubernetes-upgrade-20220629113407-24356 dockerd[10288]: time="2022-06-29T18:43:17.070053508Z" level=info msg="ignoring event" container=c0376af7bd5a8ba4891c2967307596df528da1bf315c8fde6fd6448efd11d5fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:43:17 kubernetes-upgrade-20220629113407-24356 dockerd[10288]: time="2022-06-29T18:43:17.080514992Z" level=info msg="ignoring event" container=e8329c7d7a06cf5f75982a9e3b05b9a5e5a4b0b968ed3d2fd3026f27860334fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:43:26 kubernetes-upgrade-20220629113407-24356 dockerd[10288]: time="2022-06-29T18:43:26.960040491Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=247a2c61bd6124c86e19990d86f5d8088b109ffd4867a186c4b4854a350d5dc4
	Jun 29 18:43:27 kubernetes-upgrade-20220629113407-24356 dockerd[10288]: time="2022-06-29T18:43:27.046920685Z" level=info msg="ignoring event" container=247a2c61bd6124c86e19990d86f5d8088b109ffd4867a186c4b4854a350d5dc4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:43:27 kubernetes-upgrade-20220629113407-24356 dockerd[10288]: time="2022-06-29T18:43:27.207009900Z" level=info msg="Removing stale sandbox 1656e2a334ad3d34b8a6f282a8c735b3957226954c91a99e6a1c9aaa10dd8a06 (80529cda72c9ca3b61fd16679a1ae39d7f7a06444814128decbfde99bbe14765)"
	Jun 29 18:43:27 kubernetes-upgrade-20220629113407-24356 dockerd[10288]: time="2022-06-29T18:43:27.208339261Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 33d20f19f3e1147828e65ce52a61125502d016a4c65bed218c94352905d1b034 11b358843f60a7ca5aa08b5f29e7841e4c999cd735f3cc67c5745ecffa6aed7c], retrying...."
	Jun 29 18:43:27 kubernetes-upgrade-20220629113407-24356 dockerd[10288]: time="2022-06-29T18:43:27.298503136Z" level=info msg="Removing stale sandbox 1c5b4829365062aca677d88d5e82597e28daf7484b43447838b8aaea22849b6c (2d5d4f691ef8372e770d663de6784c02438fca95cbca160490f0502ec2be8c1b)"
	Jun 29 18:43:27 kubernetes-upgrade-20220629113407-24356 dockerd[10288]: time="2022-06-29T18:43:27.299739593Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 33d20f19f3e1147828e65ce52a61125502d016a4c65bed218c94352905d1b034 5759a167c95fc0e3a1835b21d428e1a4311f07bdb73ce663d5959b8749926757], retrying...."
	Jun 29 18:43:27 kubernetes-upgrade-20220629113407-24356 dockerd[10288]: time="2022-06-29T18:43:27.392913348Z" level=info msg="Removing stale sandbox 676a207dab8d5ebc9057c30ab0160e5cf0e7473cb9a5e37d765e921c0b650dbf (6d31573a2462b13a1ab2a6430240a022c34b301164f7dfb1ae86ab93fcf1b825)"
	Jun 29 18:43:27 kubernetes-upgrade-20220629113407-24356 dockerd[10288]: time="2022-06-29T18:43:27.393916273Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 33d20f19f3e1147828e65ce52a61125502d016a4c65bed218c94352905d1b034 0215eded64e87308108de8eca166a95bb5a9bf8a6c2debc1b3fb374f316f06e2], retrying...."
	Jun 29 18:43:27 kubernetes-upgrade-20220629113407-24356 dockerd[10288]: time="2022-06-29T18:43:27.418339909Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 29 18:43:27 kubernetes-upgrade-20220629113407-24356 dockerd[10288]: time="2022-06-29T18:43:27.454973321Z" level=info msg="Loading containers: done."
	Jun 29 18:43:27 kubernetes-upgrade-20220629113407-24356 dockerd[10288]: time="2022-06-29T18:43:27.465051937Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jun 29 18:43:27 kubernetes-upgrade-20220629113407-24356 dockerd[10288]: time="2022-06-29T18:43:27.465119620Z" level=info msg="Daemon has completed initialization"
	Jun 29 18:43:27 kubernetes-upgrade-20220629113407-24356 systemd[1]: Started Docker Application Container Engine.
	Jun 29 18:43:27 kubernetes-upgrade-20220629113407-24356 dockerd[10288]: time="2022-06-29T18:43:27.492264767Z" level=info msg="API listen on [::]:2376"
	Jun 29 18:43:27 kubernetes-upgrade-20220629113407-24356 dockerd[10288]: time="2022-06-29T18:43:27.494885483Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 29 18:43:33 kubernetes-upgrade-20220629113407-24356 dockerd[10288]: time="2022-06-29T18:43:33.935924885Z" level=info msg="ignoring event" container=a01d5918eecd94a5f0e3f9da8f8f3214d0120335f3ba7cbdac27a1599f01601b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:43:33 kubernetes-upgrade-20220629113407-24356 dockerd[10288]: time="2022-06-29T18:43:33.937258192Z" level=info msg="ignoring event" container=b95c72f5637f9bb9775c70b6e314aff37715265b0fccc45b9d87f1567004c16c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:43:33 kubernetes-upgrade-20220629113407-24356 dockerd[10288]: time="2022-06-29T18:43:33.938803995Z" level=info msg="ignoring event" container=032101b0795cabd0f0c886d1e00bd1624fa26e6062d1bf0a01c3fb8369b36790 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:43:33 kubernetes-upgrade-20220629113407-24356 dockerd[10288]: time="2022-06-29T18:43:33.947763153Z" level=info msg="ignoring event" container=7bb8b2e8b233861e9d7e94f2605d6127ed91274e5076b237b53449ffdb802e2b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:43:33 kubernetes-upgrade-20220629113407-24356 dockerd[10288]: time="2022-06-29T18:43:33.949708204Z" level=info msg="ignoring event" container=a31dbbf272290add5a1bef977edb74b8e339dcc06fd71ae06a22a0b932ed63ea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:43:33 kubernetes-upgrade-20220629113407-24356 dockerd[10288]: time="2022-06-29T18:43:33.954570244Z" level=info msg="ignoring event" container=c5dad740d5cd3cc6e6217e1167dd08c81519aef6d6a22e0ea0f67e934f2b5293 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:43:34 kubernetes-upgrade-20220629113407-24356 dockerd[10288]: time="2022-06-29T18:43:34.867425145Z" level=info msg="ignoring event" container=ee77c48cb748f3a3367e4d852bd69c967801424a75bc12508987cbb1d87312b2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:43:34 kubernetes-upgrade-20220629113407-24356 dockerd[10288]: time="2022-06-29T18:43:34.953293463Z" level=info msg="ignoring event" container=9488c5c010e9fbafe836a72e04a0ffd7e31b99cd299cd0e748b4739684badff7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	934bab7fe07b3       aebe758cef4cd       7 seconds ago       Running             etcd                      3                   96de1b707b6b2
	97eefc145153e       34cdf99b1bb3b       8 seconds ago       Running             kube-controller-manager   3                   e20eba08f2101
	34b66cac4b418       d3377ffb7177c       8 seconds ago       Running             kube-apiserver            3                   7f4f132a98a03
	50193f3b15dfc       5d725196c1f47       8 seconds ago       Running             kube-scheduler            2                   f51d1118f1233
	032101b0795ca       34cdf99b1bb3b       16 seconds ago      Exited              kube-controller-manager   2                   b95c72f5637f9
	a31dbbf272290       aebe758cef4cd       16 seconds ago      Exited              etcd                      2                   7bb8b2e8b2338
	ee77c48cb748f       5d725196c1f47       17 seconds ago      Exited              kube-scheduler            1                   c5dad740d5cd3
	9488c5c010e9f       d3377ffb7177c       17 seconds ago      Exited              kube-apiserver            2                   a01d5918eecd9
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-20220629113407-24356
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-20220629113407-24356
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=80ef72c6e06144133907f90b1b2924df52b551ed
	                    minikube.k8s.io/name=kubernetes-upgrade-20220629113407-24356
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_29T11_43_00_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Jun 2022 18:42:57 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-20220629113407-24356
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Jun 2022 18:43:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Jun 2022 18:43:40 +0000   Wed, 29 Jun 2022 18:42:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Jun 2022 18:43:40 +0000   Wed, 29 Jun 2022 18:42:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Jun 2022 18:43:40 +0000   Wed, 29 Jun 2022 18:42:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Jun 2022 18:43:40 +0000   Wed, 29 Jun 2022 18:43:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    kubernetes-upgrade-20220629113407-24356
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbe1e1cef6e940328962dca52b3c5731
	  System UUID:                f7745b17-f641-413b-b8d1-79b211799116
	  Boot ID:                    fadc233d-8cf8-4f28-b4a1-fb218440cdcd
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                               ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-20220629113407-24356                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         44s
	  kube-system                 kube-apiserver-kubernetes-upgrade-20220629113407-24356             250m (4%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-20220629113407-24356    200m (3%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-scheduler-kubernetes-upgrade-20220629113407-24356             100m (1%)     0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 44s   kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  44s   kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  44s   kubelet  Node kubernetes-upgrade-20220629113407-24356 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s   kubelet  Node kubernetes-upgrade-20220629113407-24356 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s   kubelet  Node kubernetes-upgrade-20220629113407-24356 status is now: NodeHasSufficientPID
	  Normal  NodeReady                44s   kubelet  Node kubernetes-upgrade-20220629113407-24356 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [934bab7fe07b] <==
	* {"level":"info","ts":"2022-06-29T18:43:37.552Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"ea7e25599daad906","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-06-29T18:43:37.552Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-06-29T18:43:37.553Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2022-06-29T18:43:37.553Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-06-29T18:43:37.553Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T18:43:37.553Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T18:43:37.556Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-29T18:43:37.556Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-06-29T18:43:37.556Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-06-29T18:43:37.556Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-29T18:43:37.556Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-29T18:43:38.545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 4"}
	{"level":"info","ts":"2022-06-29T18:43:38.545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 4"}
	{"level":"info","ts":"2022-06-29T18:43:38.545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2022-06-29T18:43:38.545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 5"}
	{"level":"info","ts":"2022-06-29T18:43:38.545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 5"}
	{"level":"info","ts":"2022-06-29T18:43:38.545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 5"}
	{"level":"info","ts":"2022-06-29T18:43:38.545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 5"}
	{"level":"info","ts":"2022-06-29T18:43:38.546Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-20220629113407-24356 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-29T18:43:38.546Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T18:43:38.546Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T18:43:38.546Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-29T18:43:38.546Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-29T18:43:38.547Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-29T18:43:38.548Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	
	* 
	* ==> etcd [a31dbbf27229] <==
	* {"level":"info","ts":"2022-06-29T18:43:28.551Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-06-29T18:43:28.551Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T18:43:28.551Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T18:43:30.240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2022-06-29T18:43:30.240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2022-06-29T18:43:30.240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-06-29T18:43:30.240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2022-06-29T18:43:30.240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2022-06-29T18:43:30.240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2022-06-29T18:43:30.240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2022-06-29T18:43:30.240Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-20220629113407-24356 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-29T18:43:30.240Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T18:43:30.240Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T18:43:30.241Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-06-29T18:43:30.241Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-29T18:43:30.242Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-29T18:43:30.242Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-29T18:43:33.888Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-06-29T18:43:33.888Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"kubernetes-upgrade-20220629113407-24356","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	WARNING: 2022/06/29 18:43:33 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/29 18:43:33 [core] grpc: addrConn.createTransport failed to connect to {192.168.76.2:2379 192.168.76.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.76.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-06-29T18:43:33.895Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2022-06-29T18:43:33.896Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-06-29T18:43:33.897Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-06-29T18:43:33.898Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"kubernetes-upgrade-20220629113407-24356","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> kernel <==
	*  18:43:45 up 51 min,  0 users,  load average: 2.54, 1.41, 1.14
	Linux kubernetes-upgrade-20220629113407-24356 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [34b66cac4b41] <==
	* I0629 18:43:40.225641       1 controller.go:85] Starting OpenAPI controller
	I0629 18:43:40.225932       1 controller.go:85] Starting OpenAPI V3 controller
	I0629 18:43:40.226000       1 naming_controller.go:291] Starting NamingConditionController
	I0629 18:43:40.226031       1 establishing_controller.go:76] Starting EstablishingController
	I0629 18:43:40.226150       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0629 18:43:40.226210       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0629 18:43:40.226369       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0629 18:43:40.226338       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0629 18:43:40.227045       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0629 18:43:40.263425       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0629 18:43:40.306493       1 cache.go:39] Caches are synced for autoregister controller
	I0629 18:43:40.306550       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0629 18:43:40.309740       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0629 18:43:40.320557       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0629 18:43:40.322833       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0629 18:43:40.323252       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0629 18:43:40.329340       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0629 18:43:40.352981       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0629 18:43:41.001320       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0629 18:43:41.222571       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0629 18:43:41.809559       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0629 18:43:41.816088       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0629 18:43:41.837264       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0629 18:43:41.847781       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0629 18:43:41.852118       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [9488c5c010e9] <==
	* W0629 18:43:33.892835       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:43:33.892849       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:43:33.892884       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:43:33.892906       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:43:33.893114       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:43:33.893139       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:43:33.893155       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:43:33.893191       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:43:33.893286       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:43:33.893384       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:43:33.893411       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:43:33.893437       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:43:33.893486       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:43:33.893140       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:43:33.893568       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:43:33.893591       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:43:33.893615       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:43:33.893636       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:43:33.893654       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:43:33.893717       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:43:33.893742       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:43:33.893818       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:43:33.893839       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:43:33.893923       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:43:33.893956       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-controller-manager [032101b0795c] <==
	* I0629 18:43:29.785572       1 serving.go:348] Generated self-signed cert in-memory
	I0629 18:43:30.099391       1 controllermanager.go:180] Version: v1.24.2
	I0629 18:43:30.099433       1 controllermanager.go:182] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 18:43:30.100422       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0629 18:43:30.100498       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0629 18:43:30.100560       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0629 18:43:30.100712       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	* 
	* ==> kube-controller-manager [97eefc145153] <==
	* W0629 18:43:42.334856       1 controllermanager.go:571] Skipping "service"
	I0629 18:43:42.345444       1 controllermanager.go:593] Started "pvc-protection"
	I0629 18:43:42.345565       1 pvc_protection_controller.go:103] "Starting PVC protection controller"
	I0629 18:43:42.345572       1 shared_informer.go:255] Waiting for caches to sync for PVC protection
	I0629 18:43:42.365492       1 controllermanager.go:593] Started "statefulset"
	I0629 18:43:42.365660       1 stateful_set.go:147] Starting stateful set controller
	I0629 18:43:42.365671       1 shared_informer.go:255] Waiting for caches to sync for stateful set
	I0629 18:43:42.368850       1 controllermanager.go:593] Started "cronjob"
	I0629 18:43:42.369075       1 cronjob_controllerv2.go:135] "Starting cronjob controller v2"
	I0629 18:43:42.369102       1 shared_informer.go:255] Waiting for caches to sync for cronjob
	I0629 18:43:42.426400       1 shared_informer.go:262] Caches are synced for tokens
	I0629 18:43:42.428974       1 certificate_controller.go:119] Starting certificate controller "csrsigning-kubelet-serving"
	I0629 18:43:42.429016       1 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0629 18:43:42.429044       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0629 18:43:42.429148       1 certificate_controller.go:119] Starting certificate controller "csrsigning-kubelet-client"
	I0629 18:43:42.429187       1 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0629 18:43:42.429242       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0629 18:43:42.429423       1 certificate_controller.go:119] Starting certificate controller "csrsigning-kube-apiserver-client"
	I0629 18:43:42.429502       1 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0629 18:43:42.429520       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0629 18:43:42.429921       1 controllermanager.go:593] Started "csrsigning"
	I0629 18:43:42.430080       1 certificate_controller.go:119] Starting certificate controller "csrsigning-legacy-unknown"
	I0629 18:43:42.430086       1 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0629 18:43:42.430099       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0629 18:43:42.433499       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-scheduler [50193f3b15df] <==
	* I0629 18:43:37.264559       1 serving.go:348] Generated self-signed cert in-memory
	I0629 18:43:40.247462       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.2"
	I0629 18:43:40.247495       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 18:43:40.250554       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0629 18:43:40.250573       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0629 18:43:40.250587       1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0629 18:43:40.250600       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0629 18:43:40.250608       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0629 18:43:40.250565       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0629 18:43:40.251624       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0629 18:43:40.251652       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0629 18:43:40.351295       1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
	I0629 18:43:40.351529       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0629 18:43:40.351915       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [ee77c48cb748] <==
	* I0629 18:43:29.338263       1 serving.go:348] Generated self-signed cert in-memory
	W0629 18:43:32.167221       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0629 18:43:32.167240       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0629 18:43:32.167247       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0629 18:43:32.167252       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0629 18:43:32.234839       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.2"
	I0629 18:43:32.234876       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 18:43:32.236182       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0629 18:43:32.236218       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0629 18:43:32.236239       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0629 18:43:32.236254       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0629 18:43:32.336773       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0629 18:43:33.880562       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0629 18:43:33.880871       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0629 18:43:33.881443       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-29 18:38:25 UTC, end at Wed 2022-06-29 18:43:46 UTC. --
	Jun 29 18:43:38 kubernetes-upgrade-20220629113407-24356 kubelet[11828]: E0629 18:43:38.144206   11828 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220629113407-24356\" not found"
	Jun 29 18:43:38 kubernetes-upgrade-20220629113407-24356 kubelet[11828]: E0629 18:43:38.245300   11828 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220629113407-24356\" not found"
	Jun 29 18:43:38 kubernetes-upgrade-20220629113407-24356 kubelet[11828]: E0629 18:43:38.346478   11828 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220629113407-24356\" not found"
	Jun 29 18:43:38 kubernetes-upgrade-20220629113407-24356 kubelet[11828]: E0629 18:43:38.446863   11828 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220629113407-24356\" not found"
	Jun 29 18:43:38 kubernetes-upgrade-20220629113407-24356 kubelet[11828]: E0629 18:43:38.547158   11828 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220629113407-24356\" not found"
	Jun 29 18:43:38 kubernetes-upgrade-20220629113407-24356 kubelet[11828]: E0629 18:43:38.647904   11828 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220629113407-24356\" not found"
	Jun 29 18:43:38 kubernetes-upgrade-20220629113407-24356 kubelet[11828]: E0629 18:43:38.748463   11828 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220629113407-24356\" not found"
	Jun 29 18:43:38 kubernetes-upgrade-20220629113407-24356 kubelet[11828]: E0629 18:43:38.849130   11828 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220629113407-24356\" not found"
	Jun 29 18:43:38 kubernetes-upgrade-20220629113407-24356 kubelet[11828]: E0629 18:43:38.949835   11828 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220629113407-24356\" not found"
	Jun 29 18:43:39 kubernetes-upgrade-20220629113407-24356 kubelet[11828]: E0629 18:43:39.049946   11828 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220629113407-24356\" not found"
	Jun 29 18:43:39 kubernetes-upgrade-20220629113407-24356 kubelet[11828]: E0629 18:43:39.150858   11828 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220629113407-24356\" not found"
	Jun 29 18:43:39 kubernetes-upgrade-20220629113407-24356 kubelet[11828]: E0629 18:43:39.251855   11828 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220629113407-24356\" not found"
	Jun 29 18:43:39 kubernetes-upgrade-20220629113407-24356 kubelet[11828]: E0629 18:43:39.352286   11828 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220629113407-24356\" not found"
	Jun 29 18:43:39 kubernetes-upgrade-20220629113407-24356 kubelet[11828]: E0629 18:43:39.453352   11828 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220629113407-24356\" not found"
	Jun 29 18:43:39 kubernetes-upgrade-20220629113407-24356 kubelet[11828]: E0629 18:43:39.553996   11828 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220629113407-24356\" not found"
	Jun 29 18:43:39 kubernetes-upgrade-20220629113407-24356 kubelet[11828]: E0629 18:43:39.654899   11828 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220629113407-24356\" not found"
	Jun 29 18:43:39 kubernetes-upgrade-20220629113407-24356 kubelet[11828]: E0629 18:43:39.756023   11828 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220629113407-24356\" not found"
	Jun 29 18:43:39 kubernetes-upgrade-20220629113407-24356 kubelet[11828]: E0629 18:43:39.857046   11828 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220629113407-24356\" not found"
	Jun 29 18:43:39 kubernetes-upgrade-20220629113407-24356 kubelet[11828]: E0629 18:43:39.958101   11828 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220629113407-24356\" not found"
	Jun 29 18:43:40 kubernetes-upgrade-20220629113407-24356 kubelet[11828]: E0629 18:43:40.059108   11828 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220629113407-24356\" not found"
	Jun 29 18:43:40 kubernetes-upgrade-20220629113407-24356 kubelet[11828]: E0629 18:43:40.160100   11828 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220629113407-24356\" not found"
	Jun 29 18:43:40 kubernetes-upgrade-20220629113407-24356 kubelet[11828]: I0629 18:43:40.325754   11828 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-20220629113407-24356"
	Jun 29 18:43:40 kubernetes-upgrade-20220629113407-24356 kubelet[11828]: I0629 18:43:40.325851   11828 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-20220629113407-24356"
	Jun 29 18:43:41 kubernetes-upgrade-20220629113407-24356 kubelet[11828]: I0629 18:43:41.008536   11828 apiserver.go:52] "Watching apiserver"
	Jun 29 18:43:41 kubernetes-upgrade-20220629113407-24356 kubelet[11828]: I0629 18:43:41.064758   11828 reconciler.go:157] "Reconciler: start to sync state"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-20220629113407-24356 -n kubernetes-upgrade-20220629113407-24356

=== CONT  TestKubernetesUpgrade
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-20220629113407-24356 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Done: kubectl --context kubernetes-upgrade-20220629113407-24356 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (1.547760475s)
helpers_test.go:270: non-running pods: storage-provisioner
helpers_test.go:272: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context kubernetes-upgrade-20220629113407-24356 describe pod storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-20220629113407-24356 describe pod storage-provisioner: exit status 1 (44.993342ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:277: kubectl --context kubernetes-upgrade-20220629113407-24356 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220629113407-24356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220629113407-24356
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220629113407-24356: (2.791367244s)
--- FAIL: TestKubernetesUpgrade (583.72s)

TestMissingContainerUpgrade (49.83s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.2012348852.exe start -p missing-upgrade-20220629113317-24356 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.2012348852.exe start -p missing-upgrade-20220629113317-24356 --memory=2200 --driver=docker : exit status 78 (35.549063951s)

-- stdout --
	* [missing-upgrade-20220629113317-24356] minikube v1.9.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14420
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-20220629113317-24356
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-20220629113317-24356" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-29 18:33:34.587685453 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-20220629113317-24356" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-29 18:33:51.600684380 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

** /stderr **
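The unified diff in the stderr block above shows the docker.service unit minikube writes: an empty `ExecStart=` first clears the command inherited from the stock unit, then a single replacement command is set, so systemd never sees two accumulated `ExecStart=` entries. As an illustration only (the drop-in path and dockerd flags below are simplified, not minikube's exact output), the same pattern in a standard systemd drop-in looks like:

```ini
# Illustrative drop-in path, not taken from the log:
# /etc/systemd/system/docker.service.d/override.conf
[Service]
# The empty ExecStart= resets the command list inherited from the base unit.
# Without it, systemd rejects the unit with:
#   Service has more than one ExecStart= setting, which is only allowed
#   for Type=oneshot services.
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```

After writing such a file, `sudo systemctl daemon-reload && sudo systemctl restart docker` applies it, which is the same sequence the failing ssh command in the log runs.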
version_upgrade_test.go:316: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.2012348852.exe start -p missing-upgrade-20220629113317-24356 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.2012348852.exe start -p missing-upgrade-20220629113317-24356 --memory=2200 --driver=docker : exit status 70 (4.088187198s)

-- stdout --
	* [missing-upgrade-20220629113317-24356] minikube v1.9.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14420
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220629113317-24356
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-20220629113317-24356" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:316: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.2012348852.exe start -p missing-upgrade-20220629113317-24356 --memory=2200 --driver=docker 
E0629 11:34:01.572028   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.2012348852.exe start -p missing-upgrade-20220629113317-24356 --memory=2200 --driver=docker : exit status 70 (4.265302899s)

-- stdout --
	* [missing-upgrade-20220629113317-24356] minikube v1.9.1 on Darwin 12.4
	  - MINIKUBE_LOCATION=14420
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220629113317-24356
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-20220629113317-24356" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
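The repeated docker restart failures above all trace back to systemd's duplicate-`ExecStart=` rule quoted in the unit diff. A quick way to sanity-check a generated unit before reloading systemd is to count the non-empty `ExecStart=` entries; this is a sketch, not part of minikube, and the sample unit contents are hypothetical:

```shell
# Write a sample unit resembling the one in the log (contents hypothetical).
unit=$(mktemp)
cat > "$unit" <<'EOF'
[Service]
Type=notify
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF

# An empty "ExecStart=" line resets the list, so count only entries that
# actually carry a command (at least one character after the '=').
count=$(grep -c '^ExecStart=.' "$unit")
echo "non-empty ExecStart entries: $count"

# For a Type=notify service, any value other than 1 would make systemd
# refuse the unit with the error shown in the log above.
rm -f "$unit"
```

For a unit that is already installed, `systemd-analyze verify docker.service` performs a fuller version of the same check natively.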
version_upgrade_test.go:322: release start failed: exit status 70
panic.go:482: *** TestMissingContainerUpgrade FAILED at 2022-06-29 11:34:04.341131 -0700 PDT m=+2510.769533215
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-20220629113317-24356
helpers_test.go:235: (dbg) docker inspect missing-upgrade-20220629113317-24356:

-- stdout --
	[
	    {
	        "Id": "545f429b585d9a31a8e05e53edf8a4b8ef1896fb9ad61c49f433aae5fdce06be",
	        "Created": "2022-06-29T18:33:42.78697131Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 143077,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T18:33:43.023566748Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/545f429b585d9a31a8e05e53edf8a4b8ef1896fb9ad61c49f433aae5fdce06be/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/545f429b585d9a31a8e05e53edf8a4b8ef1896fb9ad61c49f433aae5fdce06be/hostname",
	        "HostsPath": "/var/lib/docker/containers/545f429b585d9a31a8e05e53edf8a4b8ef1896fb9ad61c49f433aae5fdce06be/hosts",
	        "LogPath": "/var/lib/docker/containers/545f429b585d9a31a8e05e53edf8a4b8ef1896fb9ad61c49f433aae5fdce06be/545f429b585d9a31a8e05e53edf8a4b8ef1896fb9ad61c49f433aae5fdce06be-json.log",
	        "Name": "/missing-upgrade-20220629113317-24356",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-20220629113317-24356:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1a3673f0504167f7e2db283696f650185267765203e4e79cf695a420dd267b97-init/diff:/var/lib/docker/overlay2/8b8b79709b808eaa27a04e2ec296f1b2d21c5d25614b9d1347d1fd8285409cef/diff:/var/lib/docker/overlay2/7574f2f1bbb9d21a17ced2d509fbd098e1d8b2fb202e936dd5f1c0be8d30e813/diff:/var/lib/docker/overlay2/5029e661ba0bdf2f7295c0f7b33739da7b0ff62c1d9a87125e26cfac57c158e5/diff:/var/lib/docker/overlay2/eeaea74acabb962979a44d0d4f74715548948a7c291f4e8234095cd17b24f658/diff:/var/lib/docker/overlay2/e32cfafd4170cab3fe8b3ebbdae424666050b7a451ce0b3d793e0c8fe4d36180/diff:/var/lib/docker/overlay2/96a607706312a1042389375c84fdfb79339f36409afc5c119af55288d423b9a1/diff:/var/lib/docker/overlay2/cc80edf1fa40a1935a9ca67b8fd864978912d0ad09469d13c62261d83f4fff4a/diff:/var/lib/docker/overlay2/3441df5b815fa8635ca545ade8febbed1e2b1a9efe0a226cdb1c735bd0ea955e/diff:/var/lib/docker/overlay2/018b402027d28b2174d00d507daaf1145a05d8d61476db538fb07a2727212ac9/diff:/var/lib/docker/overlay2/056157
fb82ca1cc502427bbb658c3194c224632045154515cae8817675d79c29/diff:/var/lib/docker/overlay2/262548fcd077bf710edff1d9d1397f49654d654564525d61becc910a047cb35f/diff:/var/lib/docker/overlay2/fbe9d134fa113f2f913d2b646478f35fd967983667130f21a4ac49fc3eb3a61c/diff:/var/lib/docker/overlay2/cafd9c31263a2dd59718bf33194ee108ff2cb04ebe88da0f3b3075c86eecb290/diff:/var/lib/docker/overlay2/5a3fc86875a53ae2276ef1730f3b687652c07186573ae7089e84af1a2fd1da5e/diff:/var/lib/docker/overlay2/78d1206897017a1ee2983b8dc9747b6ffd1a73fa6fa5628f14b96793d4ffed51/diff:/var/lib/docker/overlay2/cd737964a0abf017c8a3dd052b56c31fbe465e55076b34484140c2492eab424e/diff:/var/lib/docker/overlay2/8b02f7e5ffdacccb5e40e789266be8c31d6c9005abbbe17242a230ebd7308799/diff:/var/lib/docker/overlay2/c168f283b555c193d448bd26f0733e8742770578e8ef350338634c663fdec6a8/diff:/var/lib/docker/overlay2/520ceca20125bf29f608fb18d9dbba7adaafad3e241e87064ec5856c27f4c271/diff:/var/lib/docker/overlay2/2e333694e543acf6961736ff91d5c670ed92071da339d43fcc9bbd9d28e6d369/diff:/var/lib/d
ocker/overlay2/a5a64984b612987ad9eb98efcddacb5d12fede3ff92d8324ffb45875d996df9a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1a3673f0504167f7e2db283696f650185267765203e4e79cf695a420dd267b97/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1a3673f0504167f7e2db283696f650185267765203e4e79cf695a420dd267b97/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1a3673f0504167f7e2db283696f650185267765203e4e79cf695a420dd267b97/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-20220629113317-24356",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-20220629113317-24356/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-20220629113317-24356",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-20220629113317-24356",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-20220629113317-24356",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "222c4eb820cbdb8f05f63eac537a3ef2be777471fe6bf681070a37abb0b958a7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56221"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56222"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56223"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/222c4eb820cb",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "8809d6e27de94f5bbefa8350fecedef91bbaadcced52aa841e78404110e24bc2",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "9c9c31cb50892651628ac3f665ecb74b34b04b1d52a900a1fe279edf900c294c",
	                    "EndpointID": "8809d6e27de94f5bbefa8350fecedef91bbaadcced52aa841e78404110e24bc2",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20220629113317-24356 -n missing-upgrade-20220629113317-24356
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20220629113317-24356 -n missing-upgrade-20220629113317-24356: exit status 6 (431.243646ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0629 11:34:04.833613   35013 status.go:413] kubeconfig endpoint: extract IP: "missing-upgrade-20220629113317-24356" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-20220629113317-24356" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-20220629113317-24356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-20220629113317-24356
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-20220629113317-24356: (2.447981641s)
--- FAIL: TestMissingContainerUpgrade (49.83s)
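The `kubeconfig endpoint: extract IP: ... does not appear in` error from status.go above means the profile name has no matching cluster entry in the kubeconfig file, so minikube cannot recover the API server endpoint. A minimal sketch of that lookup, using an illustrative in-memory kubeconfig rather than the real YAML file (the cluster name and server URL below are made up for illustration):

```python
# Hypothetical, simplified version of the lookup behind the status.go error:
# a profile's cluster entry must exist in the kubeconfig for its endpoint
# (and hence its IP) to be extracted.
kubeconfig = {
    "clusters": [
        # Illustrative entry; a real kubeconfig is YAML parsed from disk.
        {"name": "minikube", "cluster": {"server": "https://127.0.0.1:56223"}},
    ]
}

def endpoint_for(profile, cfg):
    """Return the API server URL for a profile, or None if absent."""
    for entry in cfg.get("clusters", []):
        if entry["name"] == profile:
            return entry["cluster"]["server"]
    return None  # -> "... does not appear in <kubeconfig>" in status.go

# The failing profile is missing, so the lookup returns None:
print(endpoint_for("missing-upgrade-20220629113317-24356", kubeconfig))
```

The suggested `minikube update-context` rewrites the kubeconfig so the profile's entry reappears.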

TestStoppedBinaryUpgrade/Upgrade (46.89s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2715484711.exe start -p stopped-upgrade-20220629113518-24356 --memory=2200 --vm-driver=docker 
E0629 11:35:46.259296   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2715484711.exe start -p stopped-upgrade-20220629113518-24356 --memory=2200 --vm-driver=docker : exit status 70 (35.5343102s)

-- stdout --
	* [stopped-upgrade-20220629113518-24356] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14420
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig1475962751
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-29 18:35:36.187902347 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-20220629113518-24356" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-29 18:35:52.738901303 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-20220629113518-24356", then "minikube start -p stopped-upgrade-20220629113518-24356 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 16.25 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 38.34 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 60.98 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 82.12 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 103.78 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 126.03 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 148.17 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 167.52 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 189.91 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 211.70 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 233.86 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 256.06 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 277.81 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 299.22 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 316.03 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 337.47 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 358.66 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 379.12 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 400.92 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 423.78 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 445.22 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 467.45 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 489.02 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 511.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 533.06 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.
lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-29 18:35:52.738901303 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
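The drop-in diff in the failure above leans on systemd's ExecStart= reset rule: an empty `ExecStart=` discards any commands accumulated so far, leaving exactly one command instead of an invalid sequence (which, for `Type=notify`, systemd rejects with "Service has more than one ExecStart= setting"). A rough sketch of that semantics, not the actual systemd parser:

```python
# Sketch of systemd's ExecStart= accumulation/reset behavior as described in
# the drop-in comments above. Not a real unit-file parser.
def execstart_commands(unit_text):
    cmds = []
    for line in unit_text.splitlines():
        stripped = line.strip()
        if not stripped.startswith("ExecStart="):
            continue
        value = stripped.split("=", 1)[1]
        if value == "":
            cmds = []  # bare "ExecStart=" resets the accumulated list
        else:
            cmds.append(value)
    return cmds

base = "ExecStart=/usr/bin/dockerd -H fd://"
drop_in = "ExecStart=\nExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376"
# Only the drop-in's command survives, so the unit itself is valid:
print(execstart_commands(base + "\n" + drop_in))
# -> ['/usr/bin/dockerd -H tcp://0.0.0.0:2376']
```

Since the reset leaves a valid unit, the `Job for docker.service failed` error here is a runtime failure of the rewritten dockerd command, not a unit-file syntax error.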
version_upgrade_test.go:190: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2715484711.exe start -p stopped-upgrade-20220629113518-24356 --memory=2200 --vm-driver=docker 
E0629 11:35:58.499371   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2715484711.exe start -p stopped-upgrade-20220629113518-24356 --memory=2200 --vm-driver=docker : exit status 70 (4.81973278s)

-- stdout --
	* [stopped-upgrade-20220629113518-24356] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14420
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig1693661632
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-20220629113518-24356" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2715484711.exe start -p stopped-upgrade-20220629113518-24356 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2715484711.exe start -p stopped-upgrade-20220629113518-24356 --memory=2200 --vm-driver=docker : exit status 70 (4.853033289s)

-- stdout --
	* [stopped-upgrade-20220629113518-24356] minikube v1.9.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14420
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig2899387968
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-20220629113518-24356" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:196: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (46.89s)

TestPause/serial/VerifyStatus (62.17s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-20220629113612-24356 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-20220629113612-24356 --output=json --layout=cluster: exit status 2 (16.104319098s)

-- stdout --
	{"Name":"pause-20220629113612-24356","StatusCode":405,"StatusName":"Stopped","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220629113612-24356","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
pause_test.go:200: incorrect status code: 405
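The status line above is machine-readable: `--layout=cluster` JSON uses HTTP-like codes (200 = OK, 405 = Stopped, per the `StatusName` fields shown). A small sketch, assuming only the fields visible in the output above, of how a check like the test's could flag the paused components:

```python
import json

# Trimmed-down copy of the status JSON from the failure above.
status = json.loads(
    '{"Name":"pause-20220629113612-24356","StatusCode":405,'
    '"StatusName":"Stopped","Nodes":[{"Name":"pause-20220629113612-24356",'
    '"StatusCode":200,"Components":{"apiserver":{"StatusCode":405},'
    '"kubelet":{"StatusCode":405}}}]}'
)

def unhealthy_components(st):
    """Collect component names whose StatusCode is not 200 (OK)."""
    bad = []
    for node in st.get("Nodes", []):
        for name, comp in node.get("Components", {}).items():
            if comp["StatusCode"] != 200:
                bad.append(name)
    return sorted(bad)

print(unhealthy_components(status))  # -> ['apiserver', 'kubelet']
```

Both control-plane components report 405 (Stopped) because the cluster is still paused, which is exactly the "incorrect status code: 405" the test rejects.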
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/VerifyStatus]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20220629113612-24356
helpers_test.go:235: (dbg) docker inspect pause-20220629113612-24356:

-- stdout --
	[
	    {
	        "Id": "8697bdc981fdecfdeebfef35b9284bc4285f9a68ad1b7e5756067f679fea5dfd",
	        "Created": "2022-06-29T18:36:19.637917358Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 153430,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T18:36:19.938935287Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/8697bdc981fdecfdeebfef35b9284bc4285f9a68ad1b7e5756067f679fea5dfd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8697bdc981fdecfdeebfef35b9284bc4285f9a68ad1b7e5756067f679fea5dfd/hostname",
	        "HostsPath": "/var/lib/docker/containers/8697bdc981fdecfdeebfef35b9284bc4285f9a68ad1b7e5756067f679fea5dfd/hosts",
	        "LogPath": "/var/lib/docker/containers/8697bdc981fdecfdeebfef35b9284bc4285f9a68ad1b7e5756067f679fea5dfd/8697bdc981fdecfdeebfef35b9284bc4285f9a68ad1b7e5756067f679fea5dfd-json.log",
	        "Name": "/pause-20220629113612-24356",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20220629113612-24356:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20220629113612-24356",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4607a7888def9cb66dd3b96ddf6afe4243c0b3c86a340b795808bfbac016a772-init/diff:/var/lib/docker/overlay2/fffebe0fdfada5807aeb835ff23043496ab70477725ee4f168b630301ac03e45/diff:/var/lib/docker/overlay2/d4eb6d2f34aa8e5c143d900dccdec5da9e3d130567442e6745d4efac5202fe49/diff:/var/lib/docker/overlay2/eb35fadba12ed9c48500d69b77e98e7dd72e90d3de5197d58b370df5b5dca4c7/diff:/var/lib/docker/overlay2/7b63894f671ef1edaa7c3b80a2acbde52dcdb21970e320799b6884e79553ea3e/diff:/var/lib/docker/overlay2/3740b6bc6ff226137eb09a6350d4395dc04bd9012c6c66125dc2ea6b663082cd/diff:/var/lib/docker/overlay2/a2fda66ed4937725e85838baed61cac418abe2ba55b4e664bf944246efcdd371/diff:/var/lib/docker/overlay2/574408913c5c73ee699b85768bbb4c0ce70e697bf6eb623e32017c62e8413acd/diff:/var/lib/docker/overlay2/1cde03c3877bfb18ad0533f814863e3030abec268ff30faceab8815ea7e2daf2/diff:/var/lib/docker/overlay2/52bf889e64b2ea0160f303622d5febb9c52b864e5a6dc2bfa5db90933ccaaa29/diff:/var/lib/docker/overlay2/b131e6ae4a7a7f5705d087e4001676276e4daa26d6acfc99799bb4992e322410/diff:/var/lib/docker/overlay2/3f5c774f6f46936a974bfc6530b012fda75a59b22450e3342486fe400ab4b531/diff:/var/lib/docker/overlay2/8462528084f0c44a79e421427e0e4bc9ddd7642428c47ff1899d41b265223245/diff:/var/lib/docker/overlay2/cb9765866d13ba37669ec242ea0a1af87c92c7291c716e52037a2ccadc64ac82/diff:/var/lib/docker/overlay2/f0d06e6fa53f3ca9622f1efcfac6fe3fd18d2e5b9e07be3d624b0b9987073e55/diff:/var/lib/docker/overlay2/4ebd12d8b25cff2d3d8a989c047b696088121f0964cc7f94c6d0178ef16e3e1f/diff:/var/lib/docker/overlay2/40e16f5720fd3a8c1c8792aea0ec143af819f19cad845dde40b57ed7e372ab73/diff:/var/lib/docker/overlay2/3ce5ee64ba683c997a13b7ffa65978b4c9652772729737facd794209d49251c3/diff:/var/lib/docker/overlay2/c55c549a78d490ea576942661ba65103ea2992693548217973bb8fa1a5948b74/diff:/var/lib/docker/overlay2/4651b16dbc2e22b8a43dc1154546514f2076168d12f9c108f85fe7c6e60325f0/diff:/var/lib/docker/overlay2/9576343ea03501b15b520a83ffdc675c6d9ecd501f6ffcf6564dd75aa4f2812a/diff:/var/lib/docker/overlay2/635ba7d01f96fd1ec1acabf157f4e5c00cbf80adf65b7f8873e444745fef2c9b/diff:/var/lib/docker/overlay2/6bbe0ce6ca00a7eb5bd7c22def5fcab4ebecab4a0b4cbc5ed236429671a41b6c/diff:/var/lib/docker/overlay2/b335551ba0fcfd6bff6ef5627289041f3083dc338e67b4f4728d4937bb6fb33a/diff:/var/lib/docker/overlay2/58cd90f6ad9016f3c4befb63eac504c9d2f0fc66251c5c9e3348080785d3cec4/diff:/var/lib/docker/overlay2/b7d943a8463e032d405d531846436b89574f10efeea6e4f2df92e3bb0e169d8e/diff:/var/lib/docker/overlay2/e633899f71c18e322af1b75837392bc89fd4275534b5bc70037965b0b80a770d/diff:/var/lib/docker/overlay2/651aabda39b5851bd186e23bc84f1029d819ed8eb032b13ac12f50f3d1486bfb/diff:/var/lib/docker/overlay2/3b137e27694d242a419b3fd2f8605837edfe77dae9462c63c3d7b41538e82591/diff:/var/lib/docker/overlay2/e9d4369b871c47acb146b73f8cbe14b89b0f74027df9117a7dc73f5dee8fee1c/diff:/var/lib/docker/overlay2/9379269362a969b07cc7d7f9faff9fa3b745529df38758733014a5dbe2470775/diff:/var/lib/docker/overlay2/9231c154723fa536d9894f703ec0388448e8611d5a01d54bca3a5b0a0b17ffd2/diff:/var/lib/docker/overlay2/9610e37ded5c6da7bd2c8edc56c3ae864637bb354f8ea3d6d1ccee6bd5c2aa7f/diff:/var/lib/docker/overlay2/025ecca5e756b1b8177204df7b2f2567a76dda456b2f1a8e312efd63150a8943/diff:/var/lib/docker/overlay2/7e69089e438e096c36ea0a4a37280fd036841e3287e57635e3407eb58fc0b6da/diff:/var/lib/docker/overlay2/c6d9ef67ed33e64c8ac8c4cdc7c33eb68f5266987969676165cabc2cf2fd346b/diff:/var/lib/docker/overlay2/394627c68237f7993b91eb0c377001630bb2e709dd58f65d899d44a3586dae91/diff:/var/lib/docker/overlay2/0c0c3c94789fc85cd70d9ee2b56d67ce6471d4dced47f21f15152d4edb6bc3e5/diff:/var/lib/docker/overlay2/849809e48c9bcbfe092aa063fcd274f284eeacde89acbb602b439d4cf0aef9b6/diff:/var/lib/docker/overlay2/49c27f0a55f204b161aa2da33ba8004f46cb93bf673975ad1b6286ce659db632/diff:/var/lib/docker/overlay2/a712a8f5cdb2f3840c706296240407405826d2936df034393c1ddf3cf2480b5f/diff:/var/lib/docker/overlay2/47949bfd134ff7a50def5e9b3af3424faf216354d1f157552f3c63c67c2728ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4607a7888def9cb66dd3b96ddf6afe4243c0b3c86a340b795808bfbac016a772/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4607a7888def9cb66dd3b96ddf6afe4243c0b3c86a340b795808bfbac016a772/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4607a7888def9cb66dd3b96ddf6afe4243c0b3c86a340b795808bfbac016a772/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20220629113612-24356",
	                "Source": "/var/lib/docker/volumes/pause-20220629113612-24356/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20220629113612-24356",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20220629113612-24356",
	                "name.minikube.sigs.k8s.io": "pause-20220629113612-24356",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c78cabcc3aa02cc9218c5ee7b01f8622de92aca24ab830df3b9356f9a0e4dd72",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56924"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56925"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56926"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56927"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56928"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c78cabcc3aa0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20220629113612-24356": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8697bdc981fd",
	                        "pause-20220629113612-24356"
	                    ],
	                    "NetworkID": "40e5e05ad8cd3779ebd2f4557cfeb7997336db9caed6e5a92a5e72bd77682c86",
	                    "EndpointID": "01dc057e61e964fba588d60b4da1446122705c4180795af64815b49c29de1889",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220629113612-24356 -n pause-20220629113612-24356
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220629113612-24356 -n pause-20220629113612-24356: exit status 2 (16.099530405s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/VerifyStatus FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/VerifyStatus]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p pause-20220629113612-24356 logs -n 25

=== CONT  TestPause/serial/VerifyStatus
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p pause-20220629113612-24356 logs -n 25: (13.749449784s)
helpers_test.go:252: TestPause/serial/VerifyStatus logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                  Args                   | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:30 PDT | 29 Jun 22 11:30 PDT |
	|         | force-systemd-env-20220629113018-24356  |          |         |         |                     |                     |
	|         | --memory=2048 --alsologtostderr -v=5    |          |         |         |                     |                     |
	|         | --driver=docker                         |          |         |         |                     |                     |
	| delete  | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:30 PDT | 29 Jun 22 11:30 PDT |
	|         | offline-docker-20220629112950-24356     |          |         |         |                     |                     |
	| start   | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:30 PDT | 29 Jun 22 11:31 PDT |
	|         | force-systemd-flag-20220629113041-24356 |          |         |         |                     |                     |
	|         | --memory=2048 --force-systemd           |          |         |         |                     |                     |
	|         | --alsologtostderr -v=5 --driver=docker  |          |         |         |                     |                     |
	| ssh     | force-systemd-env-20220629113018-24356  | minikube | jenkins | v1.26.0 | 29 Jun 22 11:30 PDT | 29 Jun 22 11:30 PDT |
	|         | ssh docker info --format                |          |         |         |                     |                     |
	|         | {{.CgroupDriver}}                       |          |         |         |                     |                     |
	| delete  | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:30 PDT | 29 Jun 22 11:30 PDT |
	|         | force-systemd-env-20220629113018-24356  |          |         |         |                     |                     |
	| start   | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:30 PDT | 29 Jun 22 11:31 PDT |
	|         | docker-flags-20220629113054-24356       |          |         |         |                     |                     |
	|         | --cache-images=false                    |          |         |         |                     |                     |
	|         | --memory=2048                           |          |         |         |                     |                     |
	|         | --install-addons=false                  |          |         |         |                     |                     |
	|         | --wait=false --docker-env=FOO=BAR       |          |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                    |          |         |         |                     |                     |
	|         | --docker-opt=debug                      |          |         |         |                     |                     |
	|         | --docker-opt=icc=true                   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=5                  |          |         |         |                     |                     |
	|         | --driver=docker                         |          |         |         |                     |                     |
	| ssh     | force-systemd-flag-20220629113041-24356 | minikube | jenkins | v1.26.0 | 29 Jun 22 11:31 PDT | 29 Jun 22 11:31 PDT |
	|         | ssh docker info --format                |          |         |         |                     |                     |
	|         | {{.CgroupDriver}}                       |          |         |         |                     |                     |
	| delete  | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:31 PDT | 29 Jun 22 11:31 PDT |
	|         | force-systemd-flag-20220629113041-24356 |          |         |         |                     |                     |
	| start   | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:31 PDT | 29 Jun 22 11:31 PDT |
	|         | cert-expiration-20220629113118-24356    |          |         |         |                     |                     |
	|         | --memory=2048 --cert-expiration=3m      |          |         |         |                     |                     |
	|         | --driver=docker                         |          |         |         |                     |                     |
	| ssh     | docker-flags-20220629113054-24356       | minikube | jenkins | v1.26.0 | 29 Jun 22 11:31 PDT | 29 Jun 22 11:31 PDT |
	|         | ssh sudo systemctl show docker          |          |         |         |                     |                     |
	|         | --property=Environment --no-pager       |          |         |         |                     |                     |
	| ssh     | docker-flags-20220629113054-24356       | minikube | jenkins | v1.26.0 | 29 Jun 22 11:31 PDT | 29 Jun 22 11:31 PDT |
	|         | ssh sudo systemctl show docker          |          |         |         |                     |                     |
	|         | --property=ExecStart --no-pager         |          |         |         |                     |                     |
	| delete  | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:31 PDT | 29 Jun 22 11:31 PDT |
	|         | docker-flags-20220629113054-24356       |          |         |         |                     |                     |
	| start   | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:31 PDT | 29 Jun 22 11:32 PDT |
	|         | cert-options-20220629113128-24356       |          |         |         |                     |                     |
	|         | --memory=2048                           |          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1               |          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15           |          |         |         |                     |                     |
	|         | --apiserver-names=localhost             |          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com        |          |         |         |                     |                     |
	|         | --apiserver-port=8555                   |          |         |         |                     |                     |
	|         | --driver=docker                         |          |         |         |                     |                     |
	|         | --apiserver-name=localhost              |          |         |         |                     |                     |
	| ssh     | cert-options-20220629113128-24356       | minikube | jenkins | v1.26.0 | 29 Jun 22 11:32 PDT | 29 Jun 22 11:32 PDT |
	|         | ssh openssl x509 -text -noout -in       |          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt   |          |         |         |                     |                     |
	| ssh     | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:32 PDT | 29 Jun 22 11:32 PDT |
	|         | cert-options-20220629113128-24356       |          |         |         |                     |                     |
	|         | -- sudo cat                             |          |         |         |                     |                     |
	|         | /etc/kubernetes/admin.conf              |          |         |         |                     |                     |
	| delete  | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:32 PDT | 29 Jun 22 11:32 PDT |
	|         | cert-options-20220629113128-24356       |          |         |         |                     |                     |
	| delete  | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:33 PDT | 29 Jun 22 11:33 PDT |
	|         | running-upgrade-20220629113205-24356    |          |         |         |                     |                     |
	| delete  | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:34 PDT | 29 Jun 22 11:34 PDT |
	|         | missing-upgrade-20220629113317-24356    |          |         |         |                     |                     |
	| start   | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:34 PDT |                     |
	|         | kubernetes-upgrade-20220629113407-24356 |          |         |         |                     |                     |
	|         | --memory=2200                           |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker  |          |         |         |                     |                     |
	| start   | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:34 PDT | 29 Jun 22 11:35 PDT |
	|         | cert-expiration-20220629113118-24356    |          |         |         |                     |                     |
	|         | --memory=2048                           |          |         |         |                     |                     |
	|         | --cert-expiration=8760h                 |          |         |         |                     |                     |
	|         | --driver=docker                         |          |         |         |                     |                     |
	| delete  | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:35 PDT | 29 Jun 22 11:35 PDT |
	|         | cert-expiration-20220629113118-24356    |          |         |         |                     |                     |
	| delete  | -p                                      | minikube | jenkins | v1.26.0 | 29 Jun 22 11:36 PDT | 29 Jun 22 11:36 PDT |
	|         | stopped-upgrade-20220629113518-24356    |          |         |         |                     |                     |
	| start   | -p pause-20220629113612-24356           | minikube | jenkins | v1.26.0 | 29 Jun 22 11:36 PDT | 29 Jun 22 11:36 PDT |
	|         | --memory=2048                           |          |         |         |                     |                     |
	|         | --install-addons=false                  |          |         |         |                     |                     |
	|         | --wait=all --driver=docker              |          |         |         |                     |                     |
	| start   | -p pause-20220629113612-24356           | minikube | jenkins | v1.26.0 | 29 Jun 22 11:36 PDT | 29 Jun 22 11:37 PDT |
	|         | --alsologtostderr -v=1                  |          |         |         |                     |                     |
	|         | --driver=docker                         |          |         |         |                     |                     |
	| pause   | -p pause-20220629113612-24356           | minikube | jenkins | v1.26.0 | 29 Jun 22 11:37 PDT | 29 Jun 22 11:37 PDT |
	|         | --alsologtostderr -v=5                  |          |         |         |                     |                     |
	|---------|-----------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/29 11:36:57
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 11:36:57.638949   35836 out.go:296] Setting OutFile to fd 1 ...
	I0629 11:36:57.639146   35836 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:36:57.639151   35836 out.go:309] Setting ErrFile to fd 2...
	I0629 11:36:57.639155   35836 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:36:57.639538   35836 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 11:36:57.639808   35836 out.go:303] Setting JSON to false
	I0629 11:36:57.654898   35836 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":9385,"bootTime":1656518432,"procs":381,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0629 11:36:57.654981   35836 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 11:36:57.677029   35836 out.go:177] * [pause-20220629113612-24356] minikube v1.26.0 on Darwin 12.4
	I0629 11:36:57.699011   35836 notify.go:193] Checking for updates...
	I0629 11:36:57.720895   35836 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 11:36:57.741744   35836 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 11:36:57.783805   35836 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0629 11:36:57.804974   35836 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 11:36:57.826084   35836 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 11:36:57.852861   35836 config.go:178] Loaded profile config "pause-20220629113612-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 11:36:57.853633   35836 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 11:36:57.924821   35836 docker.go:137] docker version: linux-20.10.16
	I0629 11:36:57.924955   35836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:36:58.051010   35836 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:74 OomKillDisable:false NGoroutines:56 SystemTime:2022-06-29 18:36:57.986424647 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:36:58.093840   35836 out.go:177] * Using the docker driver based on existing profile
	I0629 11:36:58.114924   35836 start.go:284] selected driver: docker
	I0629 11:36:58.114947   35836 start.go:808] validating driver "docker" against &{Name:pause-20220629113612-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:pause-20220629113612-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:36:58.115081   35836 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 11:36:58.115283   35836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:36:58.239688   35836 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:74 OomKillDisable:false NGoroutines:56 SystemTime:2022-06-29 18:36:58.177031968 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:36:58.241760   35836 cni.go:95] Creating CNI manager for ""
	I0629 11:36:58.241780   35836 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:36:58.241793   35836 start_flags.go:310] config:
	{Name:pause-20220629113612-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:pause-20220629113612-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:36:58.285496   35836 out.go:177] * Starting control plane node pause-20220629113612-24356 in cluster pause-20220629113612-24356
	I0629 11:36:58.306738   35836 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 11:36:58.328231   35836 out.go:177] * Pulling base image ...
	I0629 11:36:58.370574   35836 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 11:36:58.370607   35836 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 11:36:58.370655   35836 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0629 11:36:58.370685   35836 cache.go:57] Caching tarball of preloaded images
	I0629 11:36:58.370889   35836 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 11:36:58.370908   35836 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0629 11:36:58.371901   35836 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/pause-20220629113612-24356/config.json ...
	I0629 11:36:58.436465   35836 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 11:36:58.436487   35836 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 11:36:58.436502   35836 cache.go:208] Successfully downloaded all kic artifacts
	I0629 11:36:58.436553   35836 start.go:352] acquiring machines lock for pause-20220629113612-24356: {Name:mkb67966c63b0864fb79e928cab50b2a6145e9b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 11:36:58.436629   35836 start.go:356] acquired machines lock for "pause-20220629113612-24356" in 56.514µs
	I0629 11:36:58.436649   35836 start.go:94] Skipping create...Using existing machine configuration
	I0629 11:36:58.436656   35836 fix.go:55] fixHost starting: 
	I0629 11:36:58.436881   35836 cli_runner.go:164] Run: docker container inspect pause-20220629113612-24356 --format={{.State.Status}}
	I0629 11:36:58.507679   35836 fix.go:103] recreateIfNeeded on pause-20220629113612-24356: state=Running err=<nil>
	W0629 11:36:58.507714   35836 fix.go:129] unexpected machine state, will restart: <nil>
	I0629 11:36:58.529475   35836 out.go:177] * Updating the running docker "pause-20220629113612-24356" container ...
	I0629 11:36:58.571395   35836 machine.go:88] provisioning docker machine ...
	I0629 11:36:58.571459   35836 ubuntu.go:169] provisioning hostname "pause-20220629113612-24356"
	I0629 11:36:58.571579   35836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220629113612-24356
	I0629 11:36:58.642999   35836 main.go:134] libmachine: Using SSH client type: native
	I0629 11:36:58.643220   35836 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 56924 <nil> <nil>}
	I0629 11:36:58.643241   35836 main.go:134] libmachine: About to run SSH command:
	sudo hostname pause-20220629113612-24356 && echo "pause-20220629113612-24356" | sudo tee /etc/hostname
	I0629 11:36:58.770575   35836 main.go:134] libmachine: SSH cmd err, output: <nil>: pause-20220629113612-24356
	
	I0629 11:36:58.770657   35836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220629113612-24356
	I0629 11:36:58.843575   35836 main.go:134] libmachine: Using SSH client type: native
	I0629 11:36:58.843726   35836 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 56924 <nil> <nil>}
	I0629 11:36:58.843740   35836 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20220629113612-24356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20220629113612-24356/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20220629113612-24356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 11:36:58.960620   35836 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 11:36:58.960638   35836 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube}
	I0629 11:36:58.960656   35836 ubuntu.go:177] setting up certificates
	I0629 11:36:58.960667   35836 provision.go:83] configureAuth start
	I0629 11:36:58.960731   35836 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220629113612-24356
	I0629 11:36:59.031337   35836 provision.go:138] copyHostCerts
	I0629 11:36:59.031416   35836 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem, removing ...
	I0629 11:36:59.031430   35836 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem
	I0629 11:36:59.031532   35836 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem (1082 bytes)
	I0629 11:36:59.031735   35836 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem, removing ...
	I0629 11:36:59.031743   35836 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem
	I0629 11:36:59.031800   35836 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem (1123 bytes)
	I0629 11:36:59.031928   35836 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem, removing ...
	I0629 11:36:59.031934   35836 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem
	I0629 11:36:59.031985   35836 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem (1675 bytes)
	I0629 11:36:59.032132   35836 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem org=jenkins.pause-20220629113612-24356 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20220629113612-24356]
	I0629 11:36:59.167724   35836 provision.go:172] copyRemoteCerts
	I0629 11:36:59.167785   35836 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 11:36:59.167825   35836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220629113612-24356
	I0629 11:36:59.239285   35836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56924 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/pause-20220629113612-24356/id_rsa Username:docker}
	I0629 11:36:59.326018   35836 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0629 11:36:59.342621   35836 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0629 11:36:59.359802   35836 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0629 11:36:59.376651   35836 provision.go:86] duration metric: configureAuth took 415.962602ms
	I0629 11:36:59.376666   35836 ubuntu.go:193] setting minikube options for container-runtime
	I0629 11:36:59.376805   35836 config.go:178] Loaded profile config "pause-20220629113612-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 11:36:59.376865   35836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220629113612-24356
	I0629 11:36:59.449881   35836 main.go:134] libmachine: Using SSH client type: native
	I0629 11:36:59.450048   35836 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 56924 <nil> <nil>}
	I0629 11:36:59.450059   35836 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0629 11:36:59.570259   35836 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0629 11:36:59.570271   35836 ubuntu.go:71] root file system type: overlay
	I0629 11:36:59.570410   35836 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0629 11:36:59.570481   35836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220629113612-24356
	I0629 11:36:59.642448   35836 main.go:134] libmachine: Using SSH client type: native
	I0629 11:36:59.642617   35836 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 56924 <nil> <nil>}
	I0629 11:36:59.642667   35836 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0629 11:36:59.770112   35836 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0629 11:36:59.770210   35836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220629113612-24356
	I0629 11:36:59.841762   35836 main.go:134] libmachine: Using SSH client type: native
	I0629 11:36:59.841934   35836 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 56924 <nil> <nil>}
	I0629 11:36:59.841948   35836 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0629 11:36:59.963896   35836 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 11:36:59.963918   35836 machine.go:91] provisioned docker machine in 1.392466664s
	I0629 11:36:59.963928   35836 start.go:306] post-start starting for "pause-20220629113612-24356" (driver="docker")
	I0629 11:36:59.963935   35836 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 11:36:59.963999   35836 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 11:36:59.964076   35836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220629113612-24356
	I0629 11:37:00.035158   35836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56924 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/pause-20220629113612-24356/id_rsa Username:docker}
	I0629 11:37:00.122847   35836 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 11:37:00.126710   35836 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 11:37:00.126726   35836 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 11:37:00.126739   35836 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 11:37:00.126743   35836 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 11:37:00.126752   35836 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/addons for local assets ...
	I0629 11:37:00.126859   35836 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files for local assets ...
	I0629 11:37:00.126997   35836 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem -> 243562.pem in /etc/ssl/certs
	I0629 11:37:00.127156   35836 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 11:37:00.135155   35836 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /etc/ssl/certs/243562.pem (1708 bytes)
	I0629 11:37:00.153848   35836 start.go:309] post-start completed in 189.905338ms
	I0629 11:37:00.153926   35836 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 11:37:00.153977   35836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220629113612-24356
	I0629 11:37:00.226737   35836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56924 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/pause-20220629113612-24356/id_rsa Username:docker}
	I0629 11:37:00.310802   35836 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 11:37:00.315139   35836 fix.go:57] fixHost completed within 1.87843699s
	I0629 11:37:00.315154   35836 start.go:81] releasing machines lock for "pause-20220629113612-24356", held for 1.878469286s
	I0629 11:37:00.315228   35836 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220629113612-24356
	I0629 11:37:00.386515   35836 ssh_runner.go:195] Run: systemctl --version
	I0629 11:37:00.386522   35836 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 11:37:00.386571   35836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220629113612-24356
	I0629 11:37:00.386588   35836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220629113612-24356
	I0629 11:37:00.463394   35836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56924 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/pause-20220629113612-24356/id_rsa Username:docker}
	I0629 11:37:00.465087   35836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56924 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/pause-20220629113612-24356/id_rsa Username:docker}
	I0629 11:37:01.031336   35836 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0629 11:37:01.041435   35836 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0629 11:37:01.041494   35836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 11:37:01.053049   35836 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 11:37:01.066577   35836 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0629 11:37:01.162880   35836 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0629 11:37:01.250562   35836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 11:37:01.344816   35836 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0629 11:37:17.183657   35836 ssh_runner.go:235] Completed: sudo systemctl restart docker: (15.838444959s)
	I0629 11:37:17.183747   35836 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0629 11:37:17.382208   35836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 11:37:17.488472   35836 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0629 11:37:17.499954   35836 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0629 11:37:17.500045   35836 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0629 11:37:17.504538   35836 start.go:468] Will wait 60s for crictl version
	I0629 11:37:17.504604   35836 ssh_runner.go:195] Run: sudo crictl version
	I0629 11:37:17.588488   35836 start.go:477] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0629 11:37:17.588579   35836 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 11:37:17.634469   35836 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 11:37:17.750598   35836 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0629 11:37:17.750736   35836 cli_runner.go:164] Run: docker exec -t pause-20220629113612-24356 dig +short host.docker.internal
	I0629 11:37:17.904087   35836 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0629 11:37:17.904188   35836 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0629 11:37:17.910304   35836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220629113612-24356
	I0629 11:37:17.988694   35836 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 11:37:17.988759   35836 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 11:37:18.027814   35836 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0629 11:37:18.027832   35836 docker.go:533] Images already preloaded, skipping extraction
	I0629 11:37:18.028029   35836 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 11:37:18.111019   35836 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0629 11:37:18.111042   35836 cache_images.go:84] Images are preloaded, skipping loading
	I0629 11:37:18.111121   35836 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0629 11:37:18.321559   35836 cni.go:95] Creating CNI manager for ""
	I0629 11:37:18.321580   35836 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:37:18.321621   35836 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0629 11:37:18.321643   35836 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20220629113612-24356 NodeName:pause-20220629113612-24356 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 11:37:18.321786   35836 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "pause-20220629113612-24356"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0629 11:37:18.321911   35836 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-20220629113612-24356 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:pause-20220629113612-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0629 11:37:18.321986   35836 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0629 11:37:18.393332   35836 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 11:37:18.393427   35836 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0629 11:37:18.405794   35836 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (488 bytes)
	I0629 11:37:18.487224   35836 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 11:37:18.514538   35836 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I0629 11:37:18.597823   35836 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0629 11:37:18.604881   35836 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/pause-20220629113612-24356 for IP: 192.168.67.2
	I0629 11:37:18.605101   35836 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key
	I0629 11:37:18.605152   35836 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key
	I0629 11:37:18.605253   35836 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/pause-20220629113612-24356/client.key
	I0629 11:37:18.605323   35836 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/pause-20220629113612-24356/apiserver.key.c7fa3a9e
	I0629 11:37:18.605404   35836 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/pause-20220629113612-24356/proxy-client.key
	I0629 11:37:18.605647   35836 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem (1338 bytes)
	W0629 11:37:18.605722   35836 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356_empty.pem, impossibly tiny 0 bytes
	I0629 11:37:18.605751   35836 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem (1679 bytes)
	I0629 11:37:18.605793   35836 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem (1082 bytes)
	I0629 11:37:18.605825   35836 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem (1123 bytes)
	I0629 11:37:18.605862   35836 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem (1675 bytes)
	I0629 11:37:18.605938   35836 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem (1708 bytes)
	I0629 11:37:18.606570   35836 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/pause-20220629113612-24356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0629 11:37:18.688080   35836 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/pause-20220629113612-24356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0629 11:37:18.709513   35836 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/pause-20220629113612-24356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0629 11:37:18.733159   35836 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/pause-20220629113612-24356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0629 11:37:18.832912   35836 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 11:37:18.854371   35836 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0629 11:37:18.893870   35836 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 11:37:18.923360   35836 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0629 11:37:18.993034   35836 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 11:37:19.022750   35836 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem --> /usr/share/ca-certificates/24356.pem (1338 bytes)
	I0629 11:37:19.101479   35836 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /usr/share/ca-certificates/243562.pem (1708 bytes)
	I0629 11:37:19.121564   35836 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0629 11:37:19.140849   35836 ssh_runner.go:195] Run: openssl version
	I0629 11:37:19.186683   35836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 11:37:19.195527   35836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:37:19.199989   35836 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 17:54 /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:37:19.200041   35836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:37:19.205501   35836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 11:37:19.213590   35836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24356.pem && ln -fs /usr/share/ca-certificates/24356.pem /etc/ssl/certs/24356.pem"
	I0629 11:37:19.222533   35836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24356.pem
	I0629 11:37:19.227558   35836 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 17:58 /usr/share/ca-certificates/24356.pem
	I0629 11:37:19.227616   35836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24356.pem
	I0629 11:37:19.233484   35836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24356.pem /etc/ssl/certs/51391683.0"
	I0629 11:37:19.241253   35836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/243562.pem && ln -fs /usr/share/ca-certificates/243562.pem /etc/ssl/certs/243562.pem"
	I0629 11:37:19.249797   35836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/243562.pem
	I0629 11:37:19.281458   35836 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 17:58 /usr/share/ca-certificates/243562.pem
	I0629 11:37:19.281521   35836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/243562.pem
	I0629 11:37:19.287299   35836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/243562.pem /etc/ssl/certs/3ec20f2e.0"
	I0629 11:37:19.295726   35836 kubeadm.go:395] StartCluster: {Name:pause-20220629113612-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:pause-20220629113612-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:37:19.295855   35836 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 11:37:19.327978   35836 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0629 11:37:19.336978   35836 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0629 11:37:19.336993   35836 kubeadm.go:626] restartCluster start
	I0629 11:37:19.337042   35836 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0629 11:37:19.344605   35836 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:37:19.344667   35836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220629113612-24356
	I0629 11:37:19.423233   35836 kubeconfig.go:92] found "pause-20220629113612-24356" server: "https://127.0.0.1:56928"
	I0629 11:37:19.423699   35836 kapi.go:59] client config for pause-20220629113612-24356: &rest.Config{Host:"https://127.0.0.1:56928", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/pause-20220629113612-24356/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/pause-20220629113612-24356/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fc060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0629 11:37:19.424255   35836 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0629 11:37:19.432537   35836 api_server.go:165] Checking apiserver status ...
	I0629 11:37:19.432588   35836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:37:19.442200   35836 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4508/cgroup
	W0629 11:37:19.450500   35836 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4508/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:37:19.450514   35836 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:56928/healthz ...
	I0629 11:37:22.403245   35836 api_server.go:266] https://127.0.0.1:56928/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0629 11:37:22.403278   35836 retry.go:31] will retry after 263.082536ms: https://127.0.0.1:56928/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0629 11:37:22.666556   35836 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:56928/healthz ...
	I0629 11:37:22.672160   35836 api_server.go:266] https://127.0.0.1:56928/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 11:37:22.672181   35836 retry.go:31] will retry after 381.329545ms: https://127.0.0.1:56928/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 11:37:23.053676   35836 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:56928/healthz ...
	I0629 11:37:23.059814   35836 api_server.go:266] https://127.0.0.1:56928/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 11:37:23.059836   35836 retry.go:31] will retry after 422.765636ms: https://127.0.0.1:56928/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 11:37:23.482780   35836 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:56928/healthz ...
	I0629 11:37:23.488193   35836 api_server.go:266] https://127.0.0.1:56928/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 11:37:23.488211   35836 retry.go:31] will retry after 473.074753ms: https://127.0.0.1:56928/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 11:37:23.963438   35836 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:56928/healthz ...
	I0629 11:37:23.972273   35836 api_server.go:266] https://127.0.0.1:56928/healthz returned 200:
	ok
	I0629 11:37:23.984167   35836 system_pods.go:86] 6 kube-system pods found
	I0629 11:37:23.984182   35836 system_pods.go:89] "coredns-6d4b75cb6d-dtwvg" [f87a54fd-f7e4-46c7-8e70-6ad22dcea249] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0629 11:37:23.984192   35836 system_pods.go:89] "etcd-pause-20220629113612-24356" [527d44e9-58c0-43a0-ae66-5dd96acb3618] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0629 11:37:23.984200   35836 system_pods.go:89] "kube-apiserver-pause-20220629113612-24356" [4a5b7e97-412c-4b83-9354-ba7ca0f44ad1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0629 11:37:23.984209   35836 system_pods.go:89] "kube-controller-manager-pause-20220629113612-24356" [569a9367-ba81-4170-a4a6-2e65fc0c1850] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0629 11:37:23.984215   35836 system_pods.go:89] "kube-proxy-w56w4" [0e7bf470-1fa7-466d-9b6d-0df5cba3d249] Running
	I0629 11:37:23.984222   35836 system_pods.go:89] "kube-scheduler-pause-20220629113612-24356" [18b4ba2b-2c02-4c53-8a76-99daa222e5f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0629 11:37:23.985386   35836 api_server.go:140] control plane version: v1.24.2
	I0629 11:37:23.985398   35836 kubeadm.go:620] The running cluster does not require reconfiguration: 127.0.0.1
	I0629 11:37:23.985403   35836 kubeadm.go:674] Taking a shortcut, as the cluster seems to be properly configured
	I0629 11:37:23.985410   35836 kubeadm.go:630] restartCluster took 4.648303449s
	I0629 11:37:23.985415   35836 kubeadm.go:397] StartCluster complete in 4.689586724s
	I0629 11:37:23.985425   35836 settings.go:142] acquiring lock: {Name:mk8cd784535a926dd1b6955ad1b3a357865d16d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:37:23.985493   35836 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 11:37:23.985908   35836 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk20ebad566718388182fa7c9da1cb4ef6bd9ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:37:23.986686   35836 kapi.go:59] client config for pause-20220629113612-24356: &rest.Config{Host:"https://127.0.0.1:56928", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/pause-20220629113612-24356/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/pause-20220629113612-24356/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fc060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0629 11:37:23.989096   35836 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20220629113612-24356" rescaled to 1
	I0629 11:37:23.989132   35836 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 11:37:23.989139   35836 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0629 11:37:23.989187   35836 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I0629 11:37:24.032494   35836 out.go:177] * Verifying Kubernetes components...
	I0629 11:37:23.989316   35836 config.go:178] Loaded profile config "pause-20220629113612-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 11:37:24.032562   35836 addons.go:65] Setting default-storageclass=true in profile "pause-20220629113612-24356"
	I0629 11:37:24.032562   35836 addons.go:65] Setting storage-provisioner=true in profile "pause-20220629113612-24356"
	I0629 11:37:24.041722   35836 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0629 11:37:24.053330   35836 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20220629113612-24356"
	I0629 11:37:24.053334   35836 addons.go:153] Setting addon storage-provisioner=true in "pause-20220629113612-24356"
	W0629 11:37:24.053350   35836 addons.go:162] addon storage-provisioner should already be in state true
	I0629 11:37:24.053350   35836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 11:37:24.053395   35836 host.go:66] Checking if "pause-20220629113612-24356" exists ...
	I0629 11:37:24.053694   35836 cli_runner.go:164] Run: docker container inspect pause-20220629113612-24356 --format={{.State.Status}}
	I0629 11:37:24.053845   35836 cli_runner.go:164] Run: docker container inspect pause-20220629113612-24356 --format={{.State.Status}}
	I0629 11:37:24.084637   35836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220629113612-24356
	I0629 11:37:24.138055   35836 kapi.go:59] client config for pause-20220629113612-24356: &rest.Config{Host:"https://127.0.0.1:56928", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/pause-20220629113612-24356/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/pause-20220629113612-24356/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fc060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0629 11:37:24.142089   35836 addons.go:153] Setting addon default-storageclass=true in "pause-20220629113612-24356"
	I0629 11:37:24.161514   35836 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0629 11:37:24.161533   35836 addons.go:162] addon default-storageclass should already be in state true
	I0629 11:37:24.161556   35836 host.go:66] Checking if "pause-20220629113612-24356" exists ...
	I0629 11:37:24.182699   35836 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 11:37:24.182713   35836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0629 11:37:24.182790   35836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220629113612-24356
	I0629 11:37:24.183265   35836 cli_runner.go:164] Run: docker container inspect pause-20220629113612-24356 --format={{.State.Status}}
	I0629 11:37:24.199016   35836 node_ready.go:35] waiting up to 6m0s for node "pause-20220629113612-24356" to be "Ready" ...
	I0629 11:37:24.203414   35836 node_ready.go:49] node "pause-20220629113612-24356" has status "Ready":"True"
	I0629 11:37:24.203425   35836 node_ready.go:38] duration metric: took 4.363835ms waiting for node "pause-20220629113612-24356" to be "Ready" ...
	I0629 11:37:24.203437   35836 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 11:37:24.209218   35836 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-dtwvg" in "kube-system" namespace to be "Ready" ...
	I0629 11:37:24.265898   35836 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0629 11:37:24.265911   35836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0629 11:37:24.265971   35836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220629113612-24356
	I0629 11:37:24.268547   35836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56924 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/pause-20220629113612-24356/id_rsa Username:docker}
	I0629 11:37:24.338995   35836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56924 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/pause-20220629113612-24356/id_rsa Username:docker}
	I0629 11:37:24.361769   35836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 11:37:24.436966   35836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0629 11:37:25.013278   35836 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0629 11:37:25.071004   35836 addons.go:414] enableAddons completed in 1.081763848s
	I0629 11:37:25.222920   35836 pod_ready.go:92] pod "coredns-6d4b75cb6d-dtwvg" in "kube-system" namespace has status "Ready":"True"
	I0629 11:37:25.222933   35836 pod_ready.go:81] duration metric: took 1.013677257s waiting for pod "coredns-6d4b75cb6d-dtwvg" in "kube-system" namespace to be "Ready" ...
	I0629 11:37:25.222939   35836 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20220629113612-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:37:27.234744   35836 pod_ready.go:102] pod "etcd-pause-20220629113612-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 11:37:29.734825   35836 pod_ready.go:102] pod "etcd-pause-20220629113612-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 11:37:31.735839   35836 pod_ready.go:102] pod "etcd-pause-20220629113612-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 11:37:33.736848   35836 pod_ready.go:102] pod "etcd-pause-20220629113612-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 11:37:36.235244   35836 pod_ready.go:102] pod "etcd-pause-20220629113612-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 11:37:38.735307   35836 pod_ready.go:92] pod "etcd-pause-20220629113612-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 11:37:38.735320   35836 pod_ready.go:81] duration metric: took 13.512059222s waiting for pod "etcd-pause-20220629113612-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:37:38.735326   35836 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220629113612-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:37:38.739715   35836 pod_ready.go:92] pod "kube-apiserver-pause-20220629113612-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 11:37:38.739723   35836 pod_ready.go:81] duration metric: took 4.392844ms waiting for pod "kube-apiserver-pause-20220629113612-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:37:38.739729   35836 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220629113612-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:37:38.743840   35836 pod_ready.go:92] pod "kube-controller-manager-pause-20220629113612-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 11:37:38.743848   35836 pod_ready.go:81] duration metric: took 4.114102ms waiting for pod "kube-controller-manager-pause-20220629113612-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:37:38.743853   35836 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w56w4" in "kube-system" namespace to be "Ready" ...
	I0629 11:37:38.749629   35836 pod_ready.go:92] pod "kube-proxy-w56w4" in "kube-system" namespace has status "Ready":"True"
	I0629 11:37:38.749637   35836 pod_ready.go:81] duration metric: took 5.779327ms waiting for pod "kube-proxy-w56w4" in "kube-system" namespace to be "Ready" ...
	I0629 11:37:38.749643   35836 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220629113612-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:37:38.753922   35836 pod_ready.go:92] pod "kube-scheduler-pause-20220629113612-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 11:37:38.753930   35836 pod_ready.go:81] duration metric: took 4.281969ms waiting for pod "kube-scheduler-pause-20220629113612-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:37:38.753934   35836 pod_ready.go:38] duration metric: took 14.550141842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 11:37:38.753951   35836 api_server.go:51] waiting for apiserver process to appear ...
	I0629 11:37:38.753997   35836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:37:38.763586   35836 api_server.go:71] duration metric: took 14.774089727s to wait for apiserver process to appear ...
	I0629 11:37:38.763598   35836 api_server.go:87] waiting for apiserver healthz status ...
	I0629 11:37:38.763605   35836 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:56928/healthz ...
	I0629 11:37:38.768464   35836 api_server.go:266] https://127.0.0.1:56928/healthz returned 200:
	ok
	I0629 11:37:38.769411   35836 api_server.go:140] control plane version: v1.24.2
	I0629 11:37:38.769419   35836 api_server.go:130] duration metric: took 5.817321ms to wait for apiserver health ...
	I0629 11:37:38.769424   35836 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 11:37:38.937537   35836 system_pods.go:59] 7 kube-system pods found
	I0629 11:37:38.937550   35836 system_pods.go:61] "coredns-6d4b75cb6d-dtwvg" [f87a54fd-f7e4-46c7-8e70-6ad22dcea249] Running
	I0629 11:37:38.937554   35836 system_pods.go:61] "etcd-pause-20220629113612-24356" [527d44e9-58c0-43a0-ae66-5dd96acb3618] Running
	I0629 11:37:38.937558   35836 system_pods.go:61] "kube-apiserver-pause-20220629113612-24356" [4a5b7e97-412c-4b83-9354-ba7ca0f44ad1] Running
	I0629 11:37:38.937563   35836 system_pods.go:61] "kube-controller-manager-pause-20220629113612-24356" [569a9367-ba81-4170-a4a6-2e65fc0c1850] Running
	I0629 11:37:38.937567   35836 system_pods.go:61] "kube-proxy-w56w4" [0e7bf470-1fa7-466d-9b6d-0df5cba3d249] Running
	I0629 11:37:38.937572   35836 system_pods.go:61] "kube-scheduler-pause-20220629113612-24356" [18b4ba2b-2c02-4c53-8a76-99daa222e5f4] Running
	I0629 11:37:38.937576   35836 system_pods.go:61] "storage-provisioner" [5d2c5182-a6aa-46c5-bb63-b0a5c44c4750] Running
	I0629 11:37:38.937580   35836 system_pods.go:74] duration metric: took 168.147682ms to wait for pod list to return data ...
	I0629 11:37:38.937584   35836 default_sa.go:34] waiting for default service account to be created ...
	I0629 11:37:39.133314   35836 default_sa.go:45] found service account: "default"
	I0629 11:37:39.133325   35836 default_sa.go:55] duration metric: took 195.731912ms for default service account to be created ...
	I0629 11:37:39.133331   35836 system_pods.go:116] waiting for k8s-apps to be running ...
	I0629 11:37:39.337859   35836 system_pods.go:86] 7 kube-system pods found
	I0629 11:37:39.337873   35836 system_pods.go:89] "coredns-6d4b75cb6d-dtwvg" [f87a54fd-f7e4-46c7-8e70-6ad22dcea249] Running
	I0629 11:37:39.337877   35836 system_pods.go:89] "etcd-pause-20220629113612-24356" [527d44e9-58c0-43a0-ae66-5dd96acb3618] Running
	I0629 11:37:39.337881   35836 system_pods.go:89] "kube-apiserver-pause-20220629113612-24356" [4a5b7e97-412c-4b83-9354-ba7ca0f44ad1] Running
	I0629 11:37:39.337885   35836 system_pods.go:89] "kube-controller-manager-pause-20220629113612-24356" [569a9367-ba81-4170-a4a6-2e65fc0c1850] Running
	I0629 11:37:39.337888   35836 system_pods.go:89] "kube-proxy-w56w4" [0e7bf470-1fa7-466d-9b6d-0df5cba3d249] Running
	I0629 11:37:39.337892   35836 system_pods.go:89] "kube-scheduler-pause-20220629113612-24356" [18b4ba2b-2c02-4c53-8a76-99daa222e5f4] Running
	I0629 11:37:39.337896   35836 system_pods.go:89] "storage-provisioner" [5d2c5182-a6aa-46c5-bb63-b0a5c44c4750] Running
	I0629 11:37:39.337913   35836 system_pods.go:126] duration metric: took 204.569786ms to wait for k8s-apps to be running ...
	I0629 11:37:39.337921   35836 system_svc.go:44] waiting for kubelet service to be running ....
	I0629 11:37:39.337969   35836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 11:37:39.347968   35836 system_svc.go:56] duration metric: took 10.043705ms WaitForService to wait for kubelet.
	I0629 11:37:39.347979   35836 kubeadm.go:572] duration metric: took 15.358470901s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0629 11:37:39.347990   35836 node_conditions.go:102] verifying NodePressure condition ...
	I0629 11:37:39.535961   35836 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0629 11:37:39.535982   35836 node_conditions.go:123] node cpu capacity is 6
	I0629 11:37:39.535993   35836 node_conditions.go:105] duration metric: took 187.994877ms to run NodePressure ...
	I0629 11:37:39.536019   35836 start.go:213] waiting for startup goroutines ...
	I0629 11:37:39.565460   35836 start.go:506] kubectl: 1.24.0, cluster: 1.24.2 (minor skew: 0)
	I0629 11:37:39.586706   35836 out.go:177] * Done! kubectl is now configured to use "pause-20220629113612-24356" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-29 18:36:20 UTC, end at Wed 2022-06-29 18:38:13 UTC. --
	Jun 29 18:37:06 pause-20220629113612-24356 dockerd[3606]: time="2022-06-29T18:37:06.705438243Z" level=info msg="ignoring event" container=e13716aa79e2d453922b2248fe542a004121a64b0d0f70beae2792fe1e2f54ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:37:06 pause-20220629113612-24356 dockerd[3606]: time="2022-06-29T18:37:06.706203791Z" level=info msg="ignoring event" container=b31bd32d815babc7610b6f1f67e58e82f415f796e9e09d33951cc00443891d54 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:37:06 pause-20220629113612-24356 dockerd[3606]: time="2022-06-29T18:37:06.708825812Z" level=info msg="ignoring event" container=605d9f40eac67bd43f51f45db52615967ff89ca47ae9d415166ae4bddc5d7ebb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:37:06 pause-20220629113612-24356 dockerd[3606]: time="2022-06-29T18:37:06.721473344Z" level=info msg="ignoring event" container=9b2a4b4735cc8056b84a5142fa2eb11a2448db8d13e4fcf2a95e3520e145f949 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:37:06 pause-20220629113612-24356 dockerd[3606]: time="2022-06-29T18:37:06.725512360Z" level=info msg="ignoring event" container=4237cc8f75eda0bbcbe0da5800e8b2f8e9bd5e3aae26bf2b5876bf26c4351526 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:37:06 pause-20220629113612-24356 dockerd[3606]: time="2022-06-29T18:37:06.805037012Z" level=info msg="ignoring event" container=2d0d6a8e00fba5f3bd75bae544349313462f42c4ae5f630defdac38b0d50ebdf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:37:16 pause-20220629113612-24356 dockerd[3606]: time="2022-06-29T18:37:16.627727248Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=60025164ee98c0f7096506d2f5d8c83a9f86a8e7f57af2c6234d6a38f3b2795b
	Jun 29 18:37:16 pause-20220629113612-24356 dockerd[3606]: time="2022-06-29T18:37:16.634754260Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=1e79274154c8d43c1b1aed3a848e4a2085ca098f90917fd75ec71e6d6834a069
	Jun 29 18:37:16 pause-20220629113612-24356 dockerd[3606]: time="2022-06-29T18:37:16.681278498Z" level=info msg="ignoring event" container=1e79274154c8d43c1b1aed3a848e4a2085ca098f90917fd75ec71e6d6834a069 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:37:16 pause-20220629113612-24356 dockerd[3606]: time="2022-06-29T18:37:16.687930548Z" level=info msg="ignoring event" container=60025164ee98c0f7096506d2f5d8c83a9f86a8e7f57af2c6234d6a38f3b2795b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:37:16 pause-20220629113612-24356 dockerd[3606]: time="2022-06-29T18:37:16.848341695Z" level=info msg="Removing stale sandbox 348f2f4777f390bed7c4402f6c00c30b9c04a2a8f5dbe4080ec2e03290deee22 (9b2a4b4735cc8056b84a5142fa2eb11a2448db8d13e4fcf2a95e3520e145f949)"
	Jun 29 18:37:16 pause-20220629113612-24356 dockerd[3606]: time="2022-06-29T18:37:16.849732265Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint beecea88f6caa28676dd3d4c3da883c1126d00e3998c0f1f69abce03d355bdf3 7ca01b6094b4af65beb923ffb67fbd68750c38dcb1b4cc691113718145fe1009], retrying...."
	Jun 29 18:37:16 pause-20220629113612-24356 dockerd[3606]: time="2022-06-29T18:37:16.935195581Z" level=info msg="Removing stale sandbox 35648f165e008156fadae72e44bdbe4c96f08a83fd486896fe93cbc52c85c1a3 (b31bd32d815babc7610b6f1f67e58e82f415f796e9e09d33951cc00443891d54)"
	Jun 29 18:37:16 pause-20220629113612-24356 dockerd[3606]: time="2022-06-29T18:37:16.936484207Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint beecea88f6caa28676dd3d4c3da883c1126d00e3998c0f1f69abce03d355bdf3 b6cc2c619f02e91f900aa68037b4ce1f9153d4ecad0a0bc5ad88d4e420599e22], retrying...."
	Jun 29 18:37:17 pause-20220629113612-24356 dockerd[3606]: time="2022-06-29T18:37:17.022912500Z" level=info msg="Removing stale sandbox 7e63298b09ab58f847160b51cc6a6319930579c8e3b79105325bf48a08305e2f (605d9f40eac67bd43f51f45db52615967ff89ca47ae9d415166ae4bddc5d7ebb)"
	Jun 29 18:37:17 pause-20220629113612-24356 dockerd[3606]: time="2022-06-29T18:37:17.024106400Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint beecea88f6caa28676dd3d4c3da883c1126d00e3998c0f1f69abce03d355bdf3 6d9f86dbf68f7a5926f2178560267f80e764c20abe231d7b6be5036e2580aa33], retrying...."
	Jun 29 18:37:17 pause-20220629113612-24356 dockerd[3606]: time="2022-06-29T18:37:17.111169474Z" level=info msg="Removing stale sandbox 8b0dfd06f3e8754d3c50f0d1b69f0d8ae633918af45c331efb5e7169c7df38bf (4237cc8f75eda0bbcbe0da5800e8b2f8e9bd5e3aae26bf2b5876bf26c4351526)"
	Jun 29 18:37:17 pause-20220629113612-24356 dockerd[3606]: time="2022-06-29T18:37:17.112427889Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint beecea88f6caa28676dd3d4c3da883c1126d00e3998c0f1f69abce03d355bdf3 0f903b30f2b7f635fd5d395cfdd3f3a834d0dffa1a5cac7d6cde831091b45f91], retrying...."
	Jun 29 18:37:17 pause-20220629113612-24356 dockerd[3606]: time="2022-06-29T18:37:17.135512191Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 29 18:37:17 pause-20220629113612-24356 dockerd[3606]: time="2022-06-29T18:37:17.171475500Z" level=info msg="Loading containers: done."
	Jun 29 18:37:17 pause-20220629113612-24356 dockerd[3606]: time="2022-06-29T18:37:17.179994013Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jun 29 18:37:17 pause-20220629113612-24356 dockerd[3606]: time="2022-06-29T18:37:17.180060306Z" level=info msg="Daemon has completed initialization"
	Jun 29 18:37:17 pause-20220629113612-24356 systemd[1]: Started Docker Application Container Engine.
	Jun 29 18:37:17 pause-20220629113612-24356 dockerd[3606]: time="2022-06-29T18:37:17.203858363Z" level=info msg="API listen on [::]:2376"
	Jun 29 18:37:17 pause-20220629113612-24356 dockerd[3606]: time="2022-06-29T18:37:17.207667035Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	38434a77a4423       6e38f40d628db       48 seconds ago       Running             storage-provisioner       0                   c48015823be11
	6877bf20cbfc3       5d725196c1f47       55 seconds ago       Running             kube-scheduler            2                   f538655060d40
	7f9caff481572       aebe758cef4cd       55 seconds ago       Running             etcd                      2                   9ee7d19767567
	bde090996c9ef       a4ca41631cc7a       55 seconds ago       Running             coredns                   1                   26b87fd18856e
	7f012010cafc4       d3377ffb7177c       56 seconds ago       Running             kube-apiserver            2                   d12b7abeb58fe
	7ea2b938caa21       34cdf99b1bb3b       56 seconds ago       Running             kube-controller-manager   2                   7023ad3ea530e
	48d064a2e5295       a634548d10b03       56 seconds ago       Running             kube-proxy                1                   6cdc8d69fbcb1
	e13716aa79e2d       34cdf99b1bb3b       About a minute ago   Exited              kube-controller-manager   1                   4237cc8f75eda
	1e79274154c8d       5d725196c1f47       About a minute ago   Exited              kube-scheduler            1                   b31bd32d815ba
	60025164ee98c       d3377ffb7177c       About a minute ago   Exited              kube-apiserver            1                   9b2a4b4735cc8
	2d0d6a8e00fba       aebe758cef4cd       About a minute ago   Exited              etcd                      1                   605d9f40eac67
	042bc315b989a       a4ca41631cc7a       About a minute ago   Exited              coredns                   0                   ec4a169ea558c
	9e20583ca010f       a634548d10b03       About a minute ago   Exited              kube-proxy                0                   871f18bec90d3
	
	* 
	* ==> coredns [042bc315b989] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [bde090996c9e] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.001466] FS-Cache: O-key=[8] '1ef1ef0200000000'
	[  +0.001062] FS-Cache: N-cookie c=00000000020eabbd [p=000000005b56f100 fl=2 nc=0 na=1]
	[  +0.001749] FS-Cache: N-cookie d=00000000566e3fec n=000000008abc4e60
	[  +0.001461] FS-Cache: N-key=[8] '1ef1ef0200000000'
	[  +0.001954] FS-Cache: Duplicate cookie detected
	[  +0.001018] FS-Cache: O-cookie c=00000000a4d4b862 [p=000000005b56f100 fl=226 nc=0 na=1]
	[  +0.001782] FS-Cache: O-cookie d=00000000566e3fec n=00000000ef8790b8
	[  +0.001443] FS-Cache: O-key=[8] '1ef1ef0200000000'
	[  +0.001100] FS-Cache: N-cookie c=00000000020eabbd [p=000000005b56f100 fl=2 nc=0 na=1]
	[  +0.001752] FS-Cache: N-cookie d=00000000566e3fec n=00000000dce4b8c4
	[  +0.001440] FS-Cache: N-key=[8] '1ef1ef0200000000'
	[  +3.694641] FS-Cache: Duplicate cookie detected
	[  +0.001033] FS-Cache: O-cookie c=000000001386eeec [p=000000005b56f100 fl=226 nc=0 na=1]
	[  +0.001808] FS-Cache: O-cookie d=00000000566e3fec n=0000000011366032
	[  +0.001472] FS-Cache: O-key=[8] '1df1ef0200000000'
	[  +0.001160] FS-Cache: N-cookie c=00000000a378cc54 [p=000000005b56f100 fl=2 nc=0 na=1]
	[  +0.001860] FS-Cache: N-cookie d=00000000566e3fec n=00000000f51d2397
	[  +0.001432] FS-Cache: N-key=[8] '1df1ef0200000000'
	[  +0.452282] FS-Cache: Duplicate cookie detected
	[  +0.001019] FS-Cache: O-cookie c=00000000ace645c6 [p=000000005b56f100 fl=226 nc=0 na=1]
	[  +0.001773] FS-Cache: O-cookie d=00000000566e3fec n=0000000063e03313
	[  +0.001421] FS-Cache: O-key=[8] '2bf1ef0200000000'
	[  +0.001097] FS-Cache: N-cookie c=00000000f5dccdb2 [p=000000005b56f100 fl=2 nc=0 na=1]
	[  +0.001745] FS-Cache: N-cookie d=00000000566e3fec n=000000008efc7178
	[  +0.001422] FS-Cache: N-key=[8] '2bf1ef0200000000'
	
	* 
	* ==> etcd [2d0d6a8e00fb] <==
	* {"level":"info","ts":"2022-06-29T18:37:01.960Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-06-29T18:37:01.961Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-29T18:37:01.961Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-29T18:37:03.453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2022-06-29T18:37:03.453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-06-29T18:37:03.453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-06-29T18:37:03.453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2022-06-29T18:37:03.453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-06-29T18:37:03.453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2022-06-29T18:37:03.453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-06-29T18:37:03.454Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-20220629113612-24356 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-29T18:37:03.454Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T18:37:03.454Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T18:37:03.454Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-29T18:37:03.455Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-29T18:37:03.456Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-06-29T18:37:03.456Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-29T18:37:06.641Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-06-29T18:37:06.641Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"pause-20220629113612-24356","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	WARNING: 2022/06/29 18:37:06 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/29 18:37:06 [core] grpc: addrConn.createTransport failed to connect to {192.168.67.2:2379 192.168.67.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.67.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-06-29T18:37:06.711Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2022-06-29T18:37:06.712Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-06-29T18:37:06.713Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-06-29T18:37:06.713Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"pause-20220629113612-24356","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> etcd [7f9caff48157] <==
	* {"level":"info","ts":"2022-06-29T18:37:19.149Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"8688e899f7831fc7","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-06-29T18:37:19.150Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-06-29T18:37:19.150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-06-29T18:37:19.150Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-06-29T18:37:19.150Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T18:37:19.150Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-29T18:37:19.150Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T18:37:19.151Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-29T18:37:19.151Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-29T18:37:19.151Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-06-29T18:37:19.151Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-06-29T18:37:20.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2022-06-29T18:37:20.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2022-06-29T18:37:20.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-06-29T18:37:20.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2022-06-29T18:37:20.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2022-06-29T18:37:20.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2022-06-29T18:37:20.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2022-06-29T18:37:20.747Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-20220629113612-24356 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-29T18:37:20.747Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T18:37:20.747Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T18:37:20.747Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-29T18:37:20.748Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-29T18:37:20.749Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-06-29T18:37:20.749Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  18:38:24 up 46 min,  0 users,  load average: 0.41, 1.14, 1.03
	Linux pause-20220629113612-24356 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [60025164ee98] <==
	* W0629 18:37:15.998555       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:37:16.055835       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:37:16.060682       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:37:16.060686       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:37:16.081665       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:37:16.095541       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:37:16.096934       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:37:16.119692       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:37:16.199017       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:37:16.224570       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:37:16.264543       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:37:16.336502       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:37:16.368544       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:37:16.372533       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:37:16.387902       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:37:16.389472       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:37:16.408553       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:37:16.413396       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:37:16.454445       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:37:16.478679       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:37:16.494975       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:37:16.543591       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:37:16.597196       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:37:16.619818       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 18:37:16.633444       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-apiserver [7f012010cafc] <==
	* I0629 18:37:22.395332       1 establishing_controller.go:76] Starting EstablishingController
	I0629 18:37:22.395371       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0629 18:37:22.395407       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0629 18:37:22.395415       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0629 18:37:22.397450       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0629 18:37:22.403188       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0629 18:37:22.403363       1 autoregister_controller.go:141] Starting autoregister controller
	I0629 18:37:22.403376       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0629 18:37:22.410178       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0629 18:37:22.410206       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0629 18:37:22.499650       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0629 18:37:22.499929       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0629 18:37:22.500115       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0629 18:37:22.503333       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0629 18:37:22.503570       1 cache.go:39] Caches are synced for autoregister controller
	I0629 18:37:22.507872       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0629 18:37:22.509440       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0629 18:37:22.510307       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0629 18:37:22.532627       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0629 18:37:23.184268       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0629 18:37:23.405093       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0629 18:37:24.956818       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0629 18:37:24.967985       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0629 18:37:24.972900       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0629 18:37:24.978374       1 controller.go:611] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [7ea2b938caa2] <==
	* I0629 18:37:34.817676       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0629 18:37:34.817739       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0629 18:37:34.817741       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0629 18:37:34.818221       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0629 18:37:34.820479       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0629 18:37:34.822744       1 shared_informer.go:262] Caches are synced for ephemeral
	I0629 18:37:34.822781       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0629 18:37:34.826676       1 shared_informer.go:262] Caches are synced for crt configmap
	I0629 18:37:34.826800       1 shared_informer.go:262] Caches are synced for HPA
	I0629 18:37:34.829515       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0629 18:37:34.829608       1 shared_informer.go:262] Caches are synced for taint
	I0629 18:37:34.829682       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I0629 18:37:34.829729       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W0629 18:37:34.829775       1 node_lifecycle_controller.go:1014] Missing timestamp for Node pause-20220629113612-24356. Assuming now as a timestamp.
	I0629 18:37:34.829954       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0629 18:37:34.829864       1 event.go:294] "Event occurred" object="pause-20220629113612-24356" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20220629113612-24356 event: Registered Node pause-20220629113612-24356 in Controller"
	I0629 18:37:34.956788       1 shared_informer.go:262] Caches are synced for resource quota
	I0629 18:37:35.008140       1 shared_informer.go:262] Caches are synced for attach detach
	I0629 18:37:35.021349       1 shared_informer.go:262] Caches are synced for PV protection
	I0629 18:37:35.024462       1 shared_informer.go:262] Caches are synced for persistent volume
	I0629 18:37:35.025121       1 shared_informer.go:262] Caches are synced for expand
	I0629 18:37:35.047106       1 shared_informer.go:262] Caches are synced for resource quota
	I0629 18:37:35.461851       1 shared_informer.go:262] Caches are synced for garbage collector
	I0629 18:37:35.480386       1 shared_informer.go:262] Caches are synced for garbage collector
	I0629 18:37:35.480473       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-controller-manager [e13716aa79e2] <==
	* I0629 18:37:04.731436       1 serving.go:348] Generated self-signed cert in-memory
	I0629 18:37:05.091387       1 controllermanager.go:180] Version: v1.24.2
	I0629 18:37:05.091425       1 controllermanager.go:182] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 18:37:05.092255       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0629 18:37:05.092319       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0629 18:37:05.092359       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0629 18:37:05.092367       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-proxy [48d064a2e529] <==
	* E0629 18:37:17.809002       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20220629113612-24356": dial tcp 192.168.67.2:8443: connect: connection refused
	I0629 18:37:22.501617       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0629 18:37:22.502580       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0629 18:37:22.502641       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0629 18:37:22.525143       1 server_others.go:206] "Using iptables Proxier"
	I0629 18:37:22.525183       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0629 18:37:22.525190       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0629 18:37:22.525204       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0629 18:37:22.525227       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 18:37:22.525435       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 18:37:22.525649       1 server.go:661] "Version info" version="v1.24.2"
	I0629 18:37:22.525703       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 18:37:22.527671       1 config.go:317] "Starting service config controller"
	I0629 18:37:22.527697       1 config.go:226] "Starting endpoint slice config controller"
	I0629 18:37:22.527709       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0629 18:37:22.527708       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0629 18:37:22.527895       1 config.go:444] "Starting node config controller"
	I0629 18:37:22.527955       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0629 18:37:22.627952       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0629 18:37:22.628038       1 shared_informer.go:262] Caches are synced for node config
	I0629 18:37:22.628133       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-proxy [9e20583ca010] <==
	* I0629 18:36:55.096403       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0629 18:36:55.096465       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0629 18:36:55.096505       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0629 18:36:55.130339       1 server_others.go:206] "Using iptables Proxier"
	I0629 18:36:55.130384       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0629 18:36:55.130393       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0629 18:36:55.130402       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0629 18:36:55.130439       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 18:36:55.130635       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 18:36:55.131378       1 server.go:661] "Version info" version="v1.24.2"
	I0629 18:36:55.131406       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 18:36:55.131992       1 config.go:317] "Starting service config controller"
	I0629 18:36:55.132007       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0629 18:36:55.132045       1 config.go:444] "Starting node config controller"
	I0629 18:36:55.132053       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0629 18:36:55.132447       1 config.go:226] "Starting endpoint slice config controller"
	I0629 18:36:55.132453       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0629 18:36:55.233156       1 shared_informer.go:262] Caches are synced for node config
	I0629 18:36:55.233255       1 shared_informer.go:262] Caches are synced for service config
	I0629 18:36:55.233352       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [1e79274154c8] <==
	* I0629 18:37:03.369328       1 serving.go:348] Generated self-signed cert in-memory
	W0629 18:37:05.422886       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0629 18:37:05.422922       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0629 18:37:05.422930       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0629 18:37:05.422934       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0629 18:37:05.439452       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.2"
	I0629 18:37:05.439487       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 18:37:05.440462       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0629 18:37:05.441053       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0629 18:37:05.441081       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0629 18:37:05.441096       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0629 18:37:05.541491       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0629 18:37:06.630375       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0629 18:37:06.630542       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0629 18:37:06.631188       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	* 
	* ==> kube-scheduler [6877bf20cbfc] <==
	* I0629 18:37:19.585471       1 serving.go:348] Generated self-signed cert in-memory
	W0629 18:37:22.420794       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0629 18:37:22.420833       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0629 18:37:22.420841       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0629 18:37:22.420846       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0629 18:37:22.434803       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.2"
	I0629 18:37:22.434897       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 18:37:22.435851       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0629 18:37:22.436204       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0629 18:37:22.497943       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0629 18:37:22.436230       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0629 18:37:22.598680       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-29 18:36:20 UTC, end at Wed 2022-06-29 18:38:26 UTC. --
	Jun 29 18:37:15 pause-20220629113612-24356 kubelet[1927]: E0629 18:37:15.994320    1927 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20220629113612-24356?timeout=10s": dial tcp 192.168.67.2:8443: connect: connection refused
	Jun 29 18:37:16 pause-20220629113612-24356 kubelet[1927]: E0629 18:37:16.394821    1927 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20220629113612-24356?timeout=10s": dial tcp 192.168.67.2:8443: connect: connection refused
	Jun 29 18:37:17 pause-20220629113612-24356 kubelet[1927]: E0629 18:37:17.195694    1927 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-20220629113612-24356?timeout=10s": dial tcp 192.168.67.2:8443: connect: connection refused
	Jun 29 18:37:17 pause-20220629113612-24356 kubelet[1927]: I0629 18:37:17.225706    1927 status_manager.go:664] "Failed to get status for pod" podUID=f87a54fd-f7e4-46c7-8e70-6ad22dcea249 pod="kube-system/coredns-6d4b75cb6d-dtwvg" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dtwvg\": dial tcp 192.168.67.2:8443: connect: connection refused"
	Jun 29 18:37:17 pause-20220629113612-24356 kubelet[1927]: I0629 18:37:17.238251    1927 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="b31bd32d815babc7610b6f1f67e58e82f415f796e9e09d33951cc00443891d54"
	Jun 29 18:37:17 pause-20220629113612-24356 kubelet[1927]: I0629 18:37:17.238296    1927 scope.go:110] "RemoveContainer" containerID="23dba87c48b83a93db74b6502446bae38026afbf406aa88ef2fba5dacc5eb0dd"
	Jun 29 18:37:17 pause-20220629113612-24356 kubelet[1927]: I0629 18:37:17.303670    1927 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="4237cc8f75eda0bbcbe0da5800e8b2f8e9bd5e3aae26bf2b5876bf26c4351526"
	Jun 29 18:37:17 pause-20220629113612-24356 kubelet[1927]: I0629 18:37:17.304318    1927 status_manager.go:664] "Failed to get status for pod" podUID=230949ddcf78cf5441691f5f1c305046 pod="kube-system/kube-controller-manager-pause-20220629113612-24356" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-20220629113612-24356\": dial tcp 192.168.67.2:8443: connect: connection refused"
	Jun 29 18:37:17 pause-20220629113612-24356 kubelet[1927]: I0629 18:37:17.310793    1927 scope.go:110] "RemoveContainer" containerID="1fe8d62e2acbd56f8fc2cb65344c98755ec2f7cc392fe45587b12ff092dac114"
	Jun 29 18:37:17 pause-20220629113612-24356 kubelet[1927]: I0629 18:37:17.314380    1927 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="605d9f40eac67bd43f51f45db52615967ff89ca47ae9d415166ae4bddc5d7ebb"
	Jun 29 18:37:17 pause-20220629113612-24356 kubelet[1927]: I0629 18:37:17.327577    1927 scope.go:110] "RemoveContainer" containerID="4363442c6392c8c2c630e96c4a19821e5dd1b0aad0e23ec7f5098c10397a1e2f"
	Jun 29 18:37:17 pause-20220629113612-24356 kubelet[1927]: I0629 18:37:17.329956    1927 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="9b2a4b4735cc8056b84a5142fa2eb11a2448db8d13e4fcf2a95e3520e145f949"
	Jun 29 18:37:17 pause-20220629113612-24356 kubelet[1927]: I0629 18:37:17.330481    1927 status_manager.go:664] "Failed to get status for pod" podUID=c436433f56e5eb1f797efc9b44e4fe2e pod="kube-system/kube-apiserver-pause-20220629113612-24356" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-20220629113612-24356\": dial tcp 192.168.67.2:8443: connect: connection refused"
	Jun 29 18:37:17 pause-20220629113612-24356 kubelet[1927]: I0629 18:37:17.404057    1927 scope.go:110] "RemoveContainer" containerID="ad4c9ab0191630f94c4d03025e238bbac16154bc3ee5ce1227252ff438f40268"
	Jun 29 18:37:22 pause-20220629113612-24356 kubelet[1927]: W0629 18:37:22.406804    1927 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:pause-20220629113612-24356" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'pause-20220629113612-24356' and this object
	Jun 29 18:37:22 pause-20220629113612-24356 kubelet[1927]: E0629 18:37:22.407183    1927 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:pause-20220629113612-24356" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'pause-20220629113612-24356' and this object
	Jun 29 18:37:22 pause-20220629113612-24356 kubelet[1927]: E0629 18:37:22.406830    1927 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Jun 29 18:37:24 pause-20220629113612-24356 kubelet[1927]: I0629 18:37:24.991103    1927 topology_manager.go:200] "Topology Admit Handler"
	Jun 29 18:37:25 pause-20220629113612-24356 kubelet[1927]: I0629 18:37:25.056788    1927 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5d2c5182-a6aa-46c5-bb63-b0a5c44c4750-tmp\") pod \"storage-provisioner\" (UID: \"5d2c5182-a6aa-46c5-bb63-b0a5c44c4750\") " pod="kube-system/storage-provisioner"
	Jun 29 18:37:25 pause-20220629113612-24356 kubelet[1927]: I0629 18:37:25.056894    1927 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6klxk\" (UniqueName: \"kubernetes.io/projected/5d2c5182-a6aa-46c5-bb63-b0a5c44c4750-kube-api-access-6klxk\") pod \"storage-provisioner\" (UID: \"5d2c5182-a6aa-46c5-bb63-b0a5c44c4750\") " pod="kube-system/storage-provisioner"
	Jun 29 18:37:40 pause-20220629113612-24356 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Jun 29 18:37:40 pause-20220629113612-24356 kubelet[1927]: I0629 18:37:40.191805    1927 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jun 29 18:37:40 pause-20220629113612-24356 systemd[1]: kubelet.service: Succeeded.
	Jun 29 18:37:40 pause-20220629113612-24356 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 29 18:37:40 pause-20220629113612-24356 systemd[1]: kubelet.service: Consumed 1.802s CPU time.
	
	* 
	* ==> storage-provisioner [38434a77a442] <==
	* I0629 18:37:26.057939       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0629 18:37:26.066856       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0629 18:37:26.066889       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0629 18:37:26.075583       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0629 18:37:26.075697       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20220629113612-24356_09be0334-facb-462e-8d1c-80e2ac8be13f!
	I0629 18:37:26.075822       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc70261a-11c6-4d9d-94f4-47bd164bc2fb", APIVersion:"v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20220629113612-24356_09be0334-facb-462e-8d1c-80e2ac8be13f became leader
	I0629 18:37:26.176576       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20220629113612-24356_09be0334-facb-462e-8d1c-80e2ac8be13f!
	
	

-- /stdout --
** stderr ** 
	E0629 11:38:23.976068   36013 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-20220629113612-24356 -n pause-20220629113612-24356
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-20220629113612-24356 -n pause-20220629113612-24356: exit status 2 (16.113102882s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-20220629113612-24356" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestPause/serial/VerifyStatus (62.17s)

TestStartStop/group/old-k8s-version/serial/FirstStart (250.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220629114717-24356 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20220629114717-24356 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m10.080986678s)

-- stdout --
	* [old-k8s-version-20220629114717-24356] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14420
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-20220629114717-24356 in cluster old-k8s-version-20220629114717-24356
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0629 11:47:17.644451   38502 out.go:296] Setting OutFile to fd 1 ...
	I0629 11:47:17.682080   38502 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:47:17.682129   38502 out.go:309] Setting ErrFile to fd 2...
	I0629 11:47:17.682142   38502 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:47:17.682850   38502 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 11:47:17.683797   38502 out.go:303] Setting JSON to false
	I0629 11:47:17.699626   38502 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":10005,"bootTime":1656518432,"procs":373,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0629 11:47:17.699720   38502 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 11:47:17.727405   38502 out.go:177] * [old-k8s-version-20220629114717-24356] minikube v1.26.0 on Darwin 12.4
	I0629 11:47:17.771594   38502 notify.go:193] Checking for updates...
	I0629 11:47:17.793244   38502 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 11:47:17.857382   38502 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 11:47:17.923145   38502 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0629 11:47:17.987324   38502 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 11:47:18.051347   38502 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 11:47:18.094771   38502 config.go:178] Loaded profile config "kubenet-20220629112950-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 11:47:18.094850   38502 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 11:47:18.256730   38502 docker.go:137] docker version: linux-20.10.16
	I0629 11:47:18.256916   38502 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:47:18.391205   38502 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 18:47:18.33996239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:47:18.436175   38502 out.go:177] * Using the docker driver based on user configuration
	I0629 11:47:18.457336   38502 start.go:284] selected driver: docker
	I0629 11:47:18.457372   38502 start.go:808] validating driver "docker" against <nil>
	I0629 11:47:18.457405   38502 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 11:47:18.460806   38502 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:47:18.583301   38502 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 18:47:18.534695487 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:47:18.583415   38502 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0629 11:47:18.583589   38502 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0629 11:47:18.605433   38502 out.go:177] * Using Docker Desktop driver with root privileges
	I0629 11:47:18.626039   38502 cni.go:95] Creating CNI manager for ""
	I0629 11:47:18.626072   38502 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:47:18.626098   38502 start_flags.go:310] config:
	{Name:old-k8s-version-20220629114717-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220629114717-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:47:18.648089   38502 out.go:177] * Starting control plane node old-k8s-version-20220629114717-24356 in cluster old-k8s-version-20220629114717-24356
	I0629 11:47:18.690163   38502 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 11:47:18.727289   38502 out.go:177] * Pulling base image ...
	I0629 11:47:18.804419   38502 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0629 11:47:18.804438   38502 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 11:47:18.804496   38502 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0629 11:47:18.804515   38502 cache.go:57] Caching tarball of preloaded images
	I0629 11:47:18.804742   38502 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 11:47:18.804766   38502 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0629 11:47:18.805842   38502 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/config.json ...
	I0629 11:47:18.806009   38502 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/config.json: {Name:mkdb4740e1b86af358bfc56945e6563d5dfe31af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:47:18.870786   38502 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 11:47:18.870815   38502 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 11:47:18.870825   38502 cache.go:208] Successfully downloaded all kic artifacts
	I0629 11:47:18.870878   38502 start.go:352] acquiring machines lock for old-k8s-version-20220629114717-24356: {Name:mkeaf278b11a6771761242ef819919656a0edee3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 11:47:18.871033   38502 start.go:356] acquired machines lock for "old-k8s-version-20220629114717-24356" in 142.857µs
	I0629 11:47:18.871063   38502 start.go:91] Provisioning new machine with config: &{Name:old-k8s-version-20220629114717-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220629114717-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 11:47:18.871152   38502 start.go:131] createHost starting for "" (driver="docker")
	I0629 11:47:18.913918   38502 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0629 11:47:18.914133   38502 start.go:165] libmachine.API.Create for "old-k8s-version-20220629114717-24356" (driver="docker")
	I0629 11:47:18.914162   38502 client.go:168] LocalClient.Create starting
	I0629 11:47:18.914254   38502 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem
	I0629 11:47:18.914325   38502 main.go:134] libmachine: Decoding PEM data...
	I0629 11:47:18.914340   38502 main.go:134] libmachine: Parsing certificate...
	I0629 11:47:18.914391   38502 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem
	I0629 11:47:18.914416   38502 main.go:134] libmachine: Decoding PEM data...
	I0629 11:47:18.914428   38502 main.go:134] libmachine: Parsing certificate...
	I0629 11:47:18.935332   38502 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220629114717-24356 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0629 11:47:19.001052   38502 cli_runner.go:211] docker network inspect old-k8s-version-20220629114717-24356 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0629 11:47:19.001171   38502 network_create.go:272] running [docker network inspect old-k8s-version-20220629114717-24356] to gather additional debugging logs...
	I0629 11:47:19.001226   38502 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220629114717-24356
	W0629 11:47:19.065659   38502 cli_runner.go:211] docker network inspect old-k8s-version-20220629114717-24356 returned with exit code 1
	I0629 11:47:19.065693   38502 network_create.go:275] error running [docker network inspect old-k8s-version-20220629114717-24356]: docker network inspect old-k8s-version-20220629114717-24356: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220629114717-24356
	I0629 11:47:19.065712   38502 network_create.go:277] output of [docker network inspect old-k8s-version-20220629114717-24356]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220629114717-24356
	
	** /stderr **
	I0629 11:47:19.065811   38502 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0629 11:47:19.129416   38502 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000ac07b8] misses:0}
	I0629 11:47:19.129451   38502 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 11:47:19.129465   38502 network_create.go:115] attempt to create docker network old-k8s-version-20220629114717-24356 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0629 11:47:19.129541   38502 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220629114717-24356 old-k8s-version-20220629114717-24356
	W0629 11:47:19.192526   38502 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220629114717-24356 old-k8s-version-20220629114717-24356 returned with exit code 1
	W0629 11:47:19.192577   38502 network_create.go:107] failed to create docker network old-k8s-version-20220629114717-24356 192.168.49.0/24, will retry: subnet is taken
	I0629 11:47:19.192849   38502 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ac07b8] amended:false}} dirty:map[] misses:0}
	I0629 11:47:19.192866   38502 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 11:47:19.193062   38502 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ac07b8] amended:true}} dirty:map[192.168.49.0:0xc000ac07b8 192.168.58.0:0xc000ac0820] misses:0}
	I0629 11:47:19.193076   38502 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 11:47:19.193086   38502 network_create.go:115] attempt to create docker network old-k8s-version-20220629114717-24356 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0629 11:47:19.193141   38502 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220629114717-24356 old-k8s-version-20220629114717-24356
	W0629 11:47:19.255484   38502 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220629114717-24356 old-k8s-version-20220629114717-24356 returned with exit code 1
	W0629 11:47:19.255533   38502 network_create.go:107] failed to create docker network old-k8s-version-20220629114717-24356 192.168.58.0/24, will retry: subnet is taken
	I0629 11:47:19.255800   38502 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ac07b8] amended:true}} dirty:map[192.168.49.0:0xc000ac07b8 192.168.58.0:0xc000ac0820] misses:1}
	I0629 11:47:19.255817   38502 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 11:47:19.256020   38502 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ac07b8] amended:true}} dirty:map[192.168.49.0:0xc000ac07b8 192.168.58.0:0xc000ac0820 192.168.67.0:0xc00000e548] misses:1}
	I0629 11:47:19.256038   38502 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 11:47:19.256046   38502 network_create.go:115] attempt to create docker network old-k8s-version-20220629114717-24356 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0629 11:47:19.256105   38502 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220629114717-24356 old-k8s-version-20220629114717-24356
	W0629 11:47:19.319422   38502 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220629114717-24356 old-k8s-version-20220629114717-24356 returned with exit code 1
	W0629 11:47:19.319485   38502 network_create.go:107] failed to create docker network old-k8s-version-20220629114717-24356 192.168.67.0/24, will retry: subnet is taken
	I0629 11:47:19.319772   38502 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ac07b8] amended:true}} dirty:map[192.168.49.0:0xc000ac07b8 192.168.58.0:0xc000ac0820 192.168.67.0:0xc00000e548] misses:2}
	I0629 11:47:19.319789   38502 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 11:47:19.319991   38502 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ac07b8] amended:true}} dirty:map[192.168.49.0:0xc000ac07b8 192.168.58.0:0xc000ac0820 192.168.67.0:0xc00000e548 192.168.76.0:0xc00000e580] misses:2}
	I0629 11:47:19.320002   38502 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0629 11:47:19.320009   38502 network_create.go:115] attempt to create docker network old-k8s-version-20220629114717-24356 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0629 11:47:19.320066   38502 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220629114717-24356 old-k8s-version-20220629114717-24356
	I0629 11:47:19.413673   38502 network_create.go:99] docker network old-k8s-version-20220629114717-24356 192.168.76.0/24 created
	I0629 11:47:19.413712   38502 kic.go:106] calculated static IP "192.168.76.2" for the "old-k8s-version-20220629114717-24356" container
	I0629 11:47:19.413814   38502 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0629 11:47:19.481161   38502 cli_runner.go:164] Run: docker volume create old-k8s-version-20220629114717-24356 --label name.minikube.sigs.k8s.io=old-k8s-version-20220629114717-24356 --label created_by.minikube.sigs.k8s.io=true
	I0629 11:47:19.545128   38502 oci.go:103] Successfully created a docker volume old-k8s-version-20220629114717-24356
	I0629 11:47:19.545237   38502 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-20220629114717-24356-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220629114717-24356 --entrypoint /usr/bin/test -v old-k8s-version-20220629114717-24356:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -d /var/lib
	I0629 11:47:20.008108   38502 oci.go:107] Successfully prepared a docker volume old-k8s-version-20220629114717-24356
	I0629 11:47:20.008151   38502 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0629 11:47:20.008164   38502 kic.go:179] Starting extracting preloaded images to volume ...
	I0629 11:47:20.008265   38502 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220629114717-24356:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -I lz4 -xf /preloaded.tar -C /extractDir
	I0629 11:47:24.446906   38502 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220629114717-24356:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e -I lz4 -xf /preloaded.tar -C /extractDir: (4.438424116s)
	I0629 11:47:24.446928   38502 kic.go:188] duration metric: took 4.438628 seconds to extract preloaded images to volume
	I0629 11:47:24.447043   38502 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0629 11:47:24.595228   38502 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220629114717-24356 --name old-k8s-version-20220629114717-24356 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220629114717-24356 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220629114717-24356 --network old-k8s-version-20220629114717-24356 --ip 192.168.76.2 --volume old-k8s-version-20220629114717-24356:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e
	I0629 11:47:25.026322   38502 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220629114717-24356 --format={{.State.Running}}
	I0629 11:47:25.110900   38502 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220629114717-24356 --format={{.State.Status}}
	I0629 11:47:25.201202   38502 cli_runner.go:164] Run: docker exec old-k8s-version-20220629114717-24356 stat /var/lib/dpkg/alternatives/iptables
	I0629 11:47:25.341564   38502 oci.go:144] the created container "old-k8s-version-20220629114717-24356" has a running status.
	I0629 11:47:25.341590   38502 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/old-k8s-version-20220629114717-24356/id_rsa...
	I0629 11:47:25.547452   38502 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/old-k8s-version-20220629114717-24356/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0629 11:47:25.675053   38502 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220629114717-24356 --format={{.State.Status}}
	I0629 11:47:25.751542   38502 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0629 11:47:25.751560   38502 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220629114717-24356 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0629 11:47:25.879955   38502 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220629114717-24356 --format={{.State.Status}}
	I0629 11:47:25.953786   38502 machine.go:88] provisioning docker machine ...
	I0629 11:47:25.953846   38502 ubuntu.go:169] provisioning hostname "old-k8s-version-20220629114717-24356"
	I0629 11:47:25.953948   38502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:47:26.037283   38502 main.go:134] libmachine: Using SSH client type: native
	I0629 11:47:26.037492   38502 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 59835 <nil> <nil>}
	I0629 11:47:26.037507   38502 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220629114717-24356 && echo "old-k8s-version-20220629114717-24356" | sudo tee /etc/hostname
	I0629 11:47:26.168074   38502 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220629114717-24356
	
	I0629 11:47:26.168145   38502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:47:26.239757   38502 main.go:134] libmachine: Using SSH client type: native
	I0629 11:47:26.239908   38502 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 59835 <nil> <nil>}
	I0629 11:47:26.239931   38502 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220629114717-24356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220629114717-24356/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220629114717-24356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 11:47:26.358681   38502 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 11:47:26.358700   38502 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube}
	I0629 11:47:26.358722   38502 ubuntu.go:177] setting up certificates
	I0629 11:47:26.358731   38502 provision.go:83] configureAuth start
	I0629 11:47:26.358800   38502 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220629114717-24356
	I0629 11:47:26.430202   38502 provision.go:138] copyHostCerts
	I0629 11:47:26.430288   38502 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem, removing ...
	I0629 11:47:26.430301   38502 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem
	I0629 11:47:26.430393   38502 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem (1082 bytes)
	I0629 11:47:26.430580   38502 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem, removing ...
	I0629 11:47:26.430588   38502 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem
	I0629 11:47:26.430645   38502 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem (1123 bytes)
	I0629 11:47:26.430783   38502 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem, removing ...
	I0629 11:47:26.430790   38502 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem
	I0629 11:47:26.430845   38502 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem (1675 bytes)
	I0629 11:47:26.430955   38502 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220629114717-24356 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220629114717-24356]
	I0629 11:47:26.493180   38502 provision.go:172] copyRemoteCerts
	I0629 11:47:26.493230   38502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 11:47:26.493275   38502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:47:26.564289   38502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59835 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/old-k8s-version-20220629114717-24356/id_rsa Username:docker}
	I0629 11:47:26.650291   38502 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0629 11:47:26.668572   38502 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0629 11:47:26.686307   38502 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0629 11:47:26.704202   38502 provision.go:86] duration metric: configureAuth took 345.449655ms
	I0629 11:47:26.704218   38502 ubuntu.go:193] setting minikube options for container-runtime
	I0629 11:47:26.704364   38502 config.go:178] Loaded profile config "old-k8s-version-20220629114717-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0629 11:47:26.704422   38502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:47:26.776215   38502 main.go:134] libmachine: Using SSH client type: native
	I0629 11:47:26.776382   38502 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 59835 <nil> <nil>}
	I0629 11:47:26.776398   38502 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0629 11:47:26.893563   38502 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0629 11:47:26.893575   38502 ubuntu.go:71] root file system type: overlay
	I0629 11:47:26.893709   38502 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0629 11:47:26.893780   38502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:47:26.968227   38502 main.go:134] libmachine: Using SSH client type: native
	I0629 11:47:26.968375   38502 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 59835 <nil> <nil>}
	I0629 11:47:26.968433   38502 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0629 11:47:27.096481   38502 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0629 11:47:27.096558   38502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:47:27.168876   38502 main.go:134] libmachine: Using SSH client type: native
	I0629 11:47:27.169051   38502 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 59835 <nil> <nil>}
	I0629 11:47:27.169065   38502 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0629 11:47:27.772083   38502 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-29 18:47:27.094284897 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0629 11:47:27.772109   38502 machine.go:91] provisioned docker machine in 1.818250241s
	I0629 11:47:27.772117   38502 client.go:171] LocalClient.Create took 8.857685319s
	I0629 11:47:27.772132   38502 start.go:173] duration metric: libmachine.API.Create for "old-k8s-version-20220629114717-24356" took 8.857733409s
	I0629 11:47:27.772140   38502 start.go:306] post-start starting for "old-k8s-version-20220629114717-24356" (driver="docker")
	I0629 11:47:27.772143   38502 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 11:47:27.772217   38502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 11:47:27.772264   38502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:47:27.843800   38502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59835 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/old-k8s-version-20220629114717-24356/id_rsa Username:docker}
	I0629 11:47:27.929805   38502 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 11:47:27.933519   38502 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 11:47:27.933537   38502 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 11:47:27.933545   38502 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 11:47:27.933550   38502 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 11:47:27.933562   38502 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/addons for local assets ...
	I0629 11:47:27.933679   38502 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files for local assets ...
	I0629 11:47:27.933823   38502 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem -> 243562.pem in /etc/ssl/certs
	I0629 11:47:27.933994   38502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 11:47:27.940914   38502 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /etc/ssl/certs/243562.pem (1708 bytes)
	I0629 11:47:27.957560   38502 start.go:309] post-start completed in 185.406959ms
	I0629 11:47:27.958055   38502 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220629114717-24356
	I0629 11:47:28.030418   38502 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/config.json ...
	I0629 11:47:28.030830   38502 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 11:47:28.030877   38502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:47:28.103874   38502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59835 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/old-k8s-version-20220629114717-24356/id_rsa Username:docker}
	I0629 11:47:28.186621   38502 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 11:47:28.191700   38502 start.go:134] duration metric: createHost completed in 9.320258172s
	I0629 11:47:28.191719   38502 start.go:81] releasing machines lock for "old-k8s-version-20220629114717-24356", held for 9.320397909s
	I0629 11:47:28.191805   38502 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220629114717-24356
	I0629 11:47:28.263210   38502 ssh_runner.go:195] Run: systemctl --version
	I0629 11:47:28.263212   38502 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 11:47:28.263276   38502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:47:28.263283   38502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:47:28.341598   38502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59835 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/old-k8s-version-20220629114717-24356/id_rsa Username:docker}
	I0629 11:47:28.343395   38502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59835 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/old-k8s-version-20220629114717-24356/id_rsa Username:docker}
	I0629 11:47:28.911318   38502 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0629 11:47:28.921140   38502 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0629 11:47:28.921198   38502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 11:47:28.930271   38502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 11:47:28.943291   38502 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0629 11:47:29.012150   38502 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0629 11:47:29.078682   38502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 11:47:29.143209   38502 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0629 11:47:29.332491   38502 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 11:47:29.367292   38502 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 11:47:29.448861   38502 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0629 11:47:29.448993   38502 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220629114717-24356 dig +short host.docker.internal
	I0629 11:47:29.576308   38502 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0629 11:47:29.576459   38502 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0629 11:47:29.580807   38502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 11:47:29.590423   38502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:47:29.661360   38502 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0629 11:47:29.661476   38502 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 11:47:29.691319   38502 docker.go:602] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0629 11:47:29.691335   38502 docker.go:533] Images already preloaded, skipping extraction
	I0629 11:47:29.691399   38502 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 11:47:29.721198   38502 docker.go:602] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0629 11:47:29.721213   38502 cache_images.go:84] Images are preloaded, skipping loading
	I0629 11:47:29.721289   38502 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0629 11:47:29.795688   38502 cni.go:95] Creating CNI manager for ""
	I0629 11:47:29.795699   38502 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:47:29.795711   38502 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0629 11:47:29.795723   38502 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220629114717-24356 NodeName:old-k8s-version-20220629114717-24356 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 11:47:29.795849   38502 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220629114717-24356"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220629114717-24356
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0629 11:47:29.795957   38502 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220629114717-24356 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220629114717-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0629 11:47:29.796023   38502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0629 11:47:29.804023   38502 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 11:47:29.804078   38502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0629 11:47:29.811181   38502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0629 11:47:29.824113   38502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 11:47:29.837376   38502 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0629 11:47:29.851461   38502 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0629 11:47:29.855421   38502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 11:47:29.865239   38502 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356 for IP: 192.168.76.2
	I0629 11:47:29.865351   38502 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key
	I0629 11:47:29.865407   38502 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key
	I0629 11:47:29.865450   38502 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/client.key
	I0629 11:47:29.865463   38502 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/client.crt with IP's: []
	I0629 11:47:30.080789   38502 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/client.crt ...
	I0629 11:47:30.080805   38502 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/client.crt: {Name:mk06d1b6fb5a6c2a0090db931089315503ccf9f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:47:30.081129   38502 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/client.key ...
	I0629 11:47:30.081137   38502 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/client.key: {Name:mkdcda8673cee87eb276fe3bc7c5eb204ed6a41a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:47:30.081339   38502 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/apiserver.key.31bdca25
	I0629 11:47:30.081355   38502 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0629 11:47:30.223931   38502 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/apiserver.crt.31bdca25 ...
	I0629 11:47:30.223950   38502 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/apiserver.crt.31bdca25: {Name:mkdccf56d431fb5727dd8a19b2e6e287bd1fca79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:47:30.224241   38502 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/apiserver.key.31bdca25 ...
	I0629 11:47:30.224250   38502 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/apiserver.key.31bdca25: {Name:mk6babcde6e507128042b395fcd53c1318bcb7f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:47:30.224440   38502 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/apiserver.crt
	I0629 11:47:30.224597   38502 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/apiserver.key
	I0629 11:47:30.224743   38502 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/proxy-client.key
	I0629 11:47:30.224757   38502 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/proxy-client.crt with IP's: []
	I0629 11:47:30.341340   38502 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/proxy-client.crt ...
	I0629 11:47:30.341351   38502 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/proxy-client.crt: {Name:mk0d3bf27d37a4ba638a5ee42bde9350dcf1986f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:47:30.341564   38502 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/proxy-client.key ...
	I0629 11:47:30.341575   38502 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/proxy-client.key: {Name:mk4fd1ca42d409c187906d6faefb61818bb44a7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:47:30.341952   38502 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem (1338 bytes)
	W0629 11:47:30.341990   38502 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356_empty.pem, impossibly tiny 0 bytes
	I0629 11:47:30.341999   38502 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem (1679 bytes)
	I0629 11:47:30.342030   38502 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem (1082 bytes)
	I0629 11:47:30.342059   38502 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem (1123 bytes)
	I0629 11:47:30.342091   38502 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem (1675 bytes)
	I0629 11:47:30.342152   38502 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem (1708 bytes)
	I0629 11:47:30.342647   38502 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0629 11:47:30.360856   38502 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0629 11:47:30.379208   38502 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0629 11:47:30.396324   38502 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0629 11:47:30.413111   38502 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 11:47:30.430032   38502 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0629 11:47:30.446648   38502 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 11:47:30.463279   38502 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0629 11:47:30.483220   38502 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /usr/share/ca-certificates/243562.pem (1708 bytes)
	I0629 11:47:30.500570   38502 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 11:47:30.517177   38502 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem --> /usr/share/ca-certificates/24356.pem (1338 bytes)
	I0629 11:47:30.533842   38502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0629 11:47:30.546311   38502 ssh_runner.go:195] Run: openssl version
	I0629 11:47:30.551594   38502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/243562.pem && ln -fs /usr/share/ca-certificates/243562.pem /etc/ssl/certs/243562.pem"
	I0629 11:47:30.559230   38502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/243562.pem
	I0629 11:47:30.563007   38502 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 17:58 /usr/share/ca-certificates/243562.pem
	I0629 11:47:30.563049   38502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/243562.pem
	I0629 11:47:30.568124   38502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/243562.pem /etc/ssl/certs/3ec20f2e.0"
	I0629 11:47:30.575991   38502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 11:47:30.583548   38502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:47:30.587451   38502 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 17:54 /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:47:30.587497   38502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:47:30.592772   38502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 11:47:30.600927   38502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24356.pem && ln -fs /usr/share/ca-certificates/24356.pem /etc/ssl/certs/24356.pem"
	I0629 11:47:30.608346   38502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24356.pem
	I0629 11:47:30.612207   38502 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 17:58 /usr/share/ca-certificates/24356.pem
	I0629 11:47:30.612246   38502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24356.pem
	I0629 11:47:30.617431   38502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24356.pem /etc/ssl/certs/51391683.0"
	I0629 11:47:30.627202   38502 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220629114717-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220629114717-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:47:30.627303   38502 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 11:47:30.655826   38502 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0629 11:47:30.663670   38502 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 11:47:30.671007   38502 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 11:47:30.671059   38502 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 11:47:30.678333   38502 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 11:47:30.678365   38502 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 11:47:31.426784   38502 out.go:204]   - Generating certificates and keys ...
	I0629 11:47:33.374598   38502 out.go:204]   - Booting up control plane ...
	W0629 11:49:28.315689   38502 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-20220629114717-24356 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-20220629114717-24356 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0629 11:49:28.315724   38502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0629 11:49:28.737777   38502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 11:49:28.747473   38502 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 11:49:28.747520   38502 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 11:49:28.755381   38502 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 11:49:28.755404   38502 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 11:49:29.491757   38502 out.go:204]   - Generating certificates and keys ...
	I0629 11:49:30.143676   38502 out.go:204]   - Booting up control plane ...
	I0629 11:51:25.061000   38502 kubeadm.go:397] StartCluster complete in 3m54.426771912s
	I0629 11:51:25.061082   38502 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:51:25.091524   38502 logs.go:274] 0 containers: []
	W0629 11:51:25.091536   38502 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:51:25.091604   38502 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:51:25.120619   38502 logs.go:274] 0 containers: []
	W0629 11:51:25.120630   38502 logs.go:276] No container was found matching "etcd"
	I0629 11:51:25.120685   38502 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:51:25.149716   38502 logs.go:274] 0 containers: []
	W0629 11:51:25.149728   38502 logs.go:276] No container was found matching "coredns"
	I0629 11:51:25.149784   38502 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:51:25.178932   38502 logs.go:274] 0 containers: []
	W0629 11:51:25.178943   38502 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:51:25.179001   38502 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:51:25.211386   38502 logs.go:274] 0 containers: []
	W0629 11:51:25.211397   38502 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:51:25.211453   38502 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:51:25.240364   38502 logs.go:274] 0 containers: []
	W0629 11:51:25.240376   38502 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:51:25.240434   38502 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:51:25.271627   38502 logs.go:274] 0 containers: []
	W0629 11:51:25.271638   38502 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:51:25.271696   38502 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:51:25.301050   38502 logs.go:274] 0 containers: []
	W0629 11:51:25.301063   38502 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:51:25.301069   38502 logs.go:123] Gathering logs for kubelet ...
	I0629 11:51:25.301078   38502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:51:25.341847   38502 logs.go:123] Gathering logs for dmesg ...
	I0629 11:51:25.341866   38502 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:51:25.353980   38502 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:51:25.353996   38502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:51:25.407485   38502 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:51:25.407496   38502 logs.go:123] Gathering logs for Docker ...
	I0629 11:51:25.407503   38502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:51:25.423225   38502 logs.go:123] Gathering logs for container status ...
	I0629 11:51:25.423238   38502 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:51:27.480116   38502 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056803755s)
	W0629 11:51:27.480242   38502 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0629 11:51:27.480257   38502 out.go:239] * 
	W0629 11:51:27.480430   38502 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0629 11:51:27.480445   38502 out.go:239] * 
	W0629 11:51:27.480986   38502 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0629 11:51:27.511210   38502 out.go:177] 
	W0629 11:51:27.553790   38502 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0629 11:51:27.553952   38502 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0629 11:51:27.554027   38502 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0629 11:51:27.619179   38502 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-20220629114717-24356 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220629114717-24356
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220629114717-24356:

-- stdout --
	[
	    {
	        "Id": "b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2",
	        "Created": "2022-06-29T18:47:24.686705454Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 227955,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T18:47:25.036356976Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/hosts",
	        "LogPath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2-json.log",
	        "Name": "/old-k8s-version-20220629114717-24356",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220629114717-24356:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220629114717-24356",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132-init/diff:/var/lib/docker/overlay2/fffebe0fdfada5807aeb835ff23043496ab70477725ee4f168b630301ac03e45/diff:/var/lib/docker/overlay2/d4eb6d2f34aa8e5c143d900dccdec5da9e3d130567442e6745d4efac5202fe49/diff:/var/lib/docker/overlay2/eb35fadba12ed9c48500d69b77e98e7dd72e90d3de5197d58b370df5b5dca4c7/diff:/var/lib/docker/overlay2/7b63894f671ef1edaa7c3b80a2acbde52dcdb21970e320799b6884e79553ea3e/diff:/var/lib/docker/overlay2/3740b6bc6ff226137eb09a6350d4395dc04bd9012c6c66125dc2ea6b663082cd/diff:/var/lib/docker/overlay2/a2fda66ed4937725e85838baed61cac418abe2ba55b4e664bf944246efcdd371/diff:/var/lib/docker/overlay2/574408913c5c73ee699b85768bbb4c0ce70e697bf6eb623e32017c62e8413acd/diff:/var/lib/docker/overlay2/1cde03c3877bfb18ad0533f814863e3030abec268ff30faceab8815ea7e2daf2/diff:/var/lib/docker/overlay2/52bf889e64b2ea0160f303622d5febb9c52b864e5a6dc2bfa5db90933ccaaa29/diff:/var/lib/docker/overlay2/b131e6ae4a7a7f5705d087e4001676276e4daa26d6acfc99799bb4992e322410/diff:/var/lib/docker/overlay2/3f5c774f6f46936a974bfc6530b012fda75a59b22450e3342486fe400ab4b531/diff:/var/lib/docker/overlay2/8462528084f0c44a79e421427e0e4bc9ddd7642428c47ff1899d41b265223245/diff:/var/lib/docker/overlay2/cb9765866d13ba37669ec242ea0a1af87c92c7291c716e52037a2ccadc64ac82/diff:/var/lib/docker/overlay2/f0d06e6fa53f3ca9622f1efcfac6fe3fd18d2e5b9e07be3d624b0b9987073e55/diff:/var/lib/docker/overlay2/4ebd12d8b25cff2d3d8a989c047b696088121f0964cc7f94c6d0178ef16e3e1f/diff:/var/lib/docker/overlay2/40e16f5720fd3a8c1c8792aea0ec143af819f19cad845dde40b57ed7e372ab73/diff:/var/lib/docker/overlay2/3ce5ee64ba683c997a13b7ffa65978b4c9652772729737facd794209d49251c3/diff:/var/lib/docker/overlay2/c55c549a78d490ea576942661ba65103ea2992693548217973bb8fa1a5948b74/diff:/var/lib/docker/overlay2/4651b16dbc2e22b8a43dc1154546514f2076168d12f9c108f85fe7c6e60325f0/diff:/var/lib/docker/overlay2/9576343ea03501b15b520a83ffdc675c6d9ecd501f6ffcf6564dd75aa4f2812a/diff:/var/lib/docker/overlay2/635ba7d01f96fd1ec1acabf157f4e5c00cbf80adf65b7f8873e444745fef2c9b/diff:/var/lib/docker/overlay2/6bbe0ce6ca00a7eb5bd7c22def5fcab4ebecab4a0b4cbc5ed236429671a41b6c/diff:/var/lib/docker/overlay2/b335551ba0fcfd6bff6ef5627289041f3083dc338e67b4f4728d4937bb6fb33a/diff:/var/lib/docker/overlay2/58cd90f6ad9016f3c4befb63eac504c9d2f0fc66251c5c9e3348080785d3cec4/diff:/var/lib/docker/overlay2/b7d943a8463e032d405d531846436b89574f10efeea6e4f2df92e3bb0e169d8e/diff:/var/lib/docker/overlay2/e633899f71c18e322af1b75837392bc89fd4275534b5bc70037965b0b80a770d/diff:/var/lib/docker/overlay2/651aabda39b5851bd186e23bc84f1029d819ed8eb032b13ac12f50f3d1486bfb/diff:/var/lib/docker/overlay2/3b137e27694d242a419b3fd2f8605837edfe77dae9462c63c3d7b41538e82591/diff:/var/lib/docker/overlay2/e9d4369b871c47acb146b73f8cbe14b89b0f74027df9117a7dc73f5dee8fee1c/diff:/var/lib/docker/overlay2/9379269362a969b07cc7d7f9faff9fa3b745529df38758733014a5dbe2470775/diff:/var/lib/docker/overlay2/9231c154723fa536d9894f703ec0388448e8611d5a01d54bca3a5b0a0b17ffd2/diff:/var/lib/docker/overlay2/9610e37ded5c6da7bd2c8edc56c3ae864637bb354f8ea3d6d1ccee6bd5c2aa7f/diff:/var/lib/docker/overlay2/025ecca5e756b1b8177204df7b2f2567a76dda456b2f1a8e312efd63150a8943/diff:/var/lib/docker/overlay2/7e69089e438e096c36ea0a4a37280fd036841e3287e57635e3407eb58fc0b6da/diff:/var/lib/docker/overlay2/c6d9ef67ed33e64c8ac8c4cdc7c33eb68f5266987969676165cabc2cf2fd346b/diff:/var/lib/docker/overlay2/394627c68237f7993b91eb0c377001630bb2e709dd58f65d899d44a3586dae91/diff:/var/lib/docker/overlay2/0c0c3c94789fc85cd70d9ee2b56d67ce6471d4dced47f21f15152d4edb6bc3e5/diff:/var/lib/docker/overlay2/849809e48c9bcbfe092aa063fcd274f284eeacde89acbb602b439d4cf0aef9b6/diff:/var/lib/docker/overlay2/49c27f0a55f204b161aa2da33ba8004f46cb93bf673975ad1b6286ce659db632/diff:/var/lib/docker/overlay2/a712a8f5cdb2f3840c706296240407405826d2936df034393c1ddf3cf2480b5f/diff:/var/lib/docker/overlay2/47949bfd134ff7a50def5e9b3af3424faf216354d1f157552f3c63c67c2728ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220629114717-24356",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220629114717-24356/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220629114717-24356",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220629114717-24356",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220629114717-24356",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0ad81cc98b0ebf2b160d8945fca2e2856e503fffc2084c3be728057b77e40b5b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59835"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59836"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59837"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59838"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59839"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0ad81cc98b0e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220629114717-24356": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b1f5e01895cc",
	                        "old-k8s-version-20220629114717-24356"
	                    ],
	                    "NetworkID": "7e2ec4ec0dd8da4d477d55acc03296107258203e7a7a266adf169e3b0ee9c64c",
	                    "EndpointID": "7041bb4c7eadd754f0ae15426e0376c2005b1379e2507d9f07e2b7d8eb3cb6d3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356: exit status 6 (447.772798ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0629 11:51:28.222885   39168 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220629114717-24356" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220629114717-24356" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (250.63s)

TestNetworkPlugins/group/kubenet/HairPin (53.59s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220629112950-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220629112950-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.113865009s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220629112950-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0629 11:47:41.599193   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629112951-24356/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220629112950-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.105565406s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220629112950-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220629112950-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.108573822s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220629112950-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220629112950-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.112785467s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220629112950-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220629112950-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.103490822s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220629112950-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220629112950-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.111070589s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220629112950-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0629 11:48:22.565196   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629112951-24356/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220629112950-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.108961306s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:243: failed to connect via pod host: exit status 1
--- FAIL: TestNetworkPlugins/group/kubenet/HairPin (53.59s)

TestStartStop/group/old-k8s-version/serial/DeployApp (1.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-20220629114717-24356 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220629114717-24356 create -f testdata/busybox.yaml: exit status 1 (29.164316ms)

** stderr ** 
	error: context "old-k8s-version-20220629114717-24356" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-20220629114717-24356 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220629114717-24356
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220629114717-24356:

-- stdout --
	[
	    {
	        "Id": "b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2",
	        "Created": "2022-06-29T18:47:24.686705454Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 227955,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T18:47:25.036356976Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/hosts",
	        "LogPath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2-json.log",
	        "Name": "/old-k8s-version-20220629114717-24356",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220629114717-24356:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220629114717-24356",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132-init/diff:/var/lib/docker/overlay2/fffebe0fdfada5807aeb835ff23043496ab70477725ee4f168b630301ac03e45/diff:/var/lib/docker/overlay2/d4eb6d2f34aa8e5c143d900dccdec5da9e3d130567442e6745d4efac5202fe49/diff:/var/lib/docker/overlay2/eb35fadba12ed9c48500d69b77e98e7dd72e90d3de5197d58b370df5b5dca4c7/diff:/var/lib/docker/overlay2/7b63894f671ef1edaa7c3b80a2acbde52dcdb21970e320799b6884e79553ea3e/diff:/var/lib/docker/overlay2/3740b6bc6ff226137eb09a6350d4395dc04bd9012c6c66125dc2ea6b663082cd/diff:/var/lib/docker/overlay2/a2fda66ed4937725e85838baed61cac418abe2ba55b4e664bf944246efcdd371/diff:/var/lib/docker/overlay2/574408913c5c73ee699b85768bbb4c0ce70e697bf6eb623e32017c62e8413acd/diff:/var/lib/docker/overlay2/1cde03c3877bfb18ad0533f814863e3030abec268ff30faceab8815ea7e2daf2/diff:/var/lib/docker/overlay2/52bf889e64b2ea0160f303622d5febb9c52b864e5a6dc2bfa5db90933ccaaa29/diff:/var/lib/docker/overlay2/b131e6ae4a7a7f5705d087e4001676276e4daa26d6acfc99799bb4992e322410/diff:/var/lib/docker/overlay2/3f5c774f6f46936a974bfc6530b012fda75a59b22450e3342486fe400ab4b531/diff:/var/lib/docker/overlay2/8462528084f0c44a79e421427e0e4bc9ddd7642428c47ff1899d41b265223245/diff:/var/lib/docker/overlay2/cb9765866d13ba37669ec242ea0a1af87c92c7291c716e52037a2ccadc64ac82/diff:/var/lib/docker/overlay2/f0d06e6fa53f3ca9622f1efcfac6fe3fd18d2e5b9e07be3d624b0b9987073e55/diff:/var/lib/docker/overlay2/4ebd12d8b25cff2d3d8a989c047b696088121f0964cc7f94c6d0178ef16e3e1f/diff:/var/lib/docker/overlay2/40e16f5720fd3a8c1c8792aea0ec143af819f19cad845dde40b57ed7e372ab73/diff:/var/lib/docker/overlay2/3ce5ee64ba683c997a13b7ffa65978b4c9652772729737facd794209d49251c3/diff:/var/lib/docker/overlay2/c55c549a78d490ea576942661ba65103ea2992693548217973bb8fa1a5948b74/diff:/var/lib/docker/overlay2/4651b16dbc2e22b8a43dc1154546514f2076168d12f9c108f85fe7c6e60325f0/diff:/var/lib/docker/overlay2/9576343ea03501b15b520a83ffdc675c6d9ecd501f6ffcf6564dd75aa4f2812a/diff:/var/lib/docker/overlay2/635ba7d01f96fd1ec1acabf157f4e5c00cbf80adf65b7f8873e444745fef2c9b/diff:/var/lib/docker/overlay2/6bbe0ce6ca00a7eb5bd7c22def5fcab4ebecab4a0b4cbc5ed236429671a41b6c/diff:/var/lib/docker/overlay2/b335551ba0fcfd6bff6ef5627289041f3083dc338e67b4f4728d4937bb6fb33a/diff:/var/lib/docker/overlay2/58cd90f6ad9016f3c4befb63eac504c9d2f0fc66251c5c9e3348080785d3cec4/diff:/var/lib/docker/overlay2/b7d943a8463e032d405d531846436b89574f10efeea6e4f2df92e3bb0e169d8e/diff:/var/lib/docker/overlay2/e633899f71c18e322af1b75837392bc89fd4275534b5bc70037965b0b80a770d/diff:/var/lib/docker/overlay2/651aabda39b5851bd186e23bc84f1029d819ed8eb032b13ac12f50f3d1486bfb/diff:/var/lib/docker/overlay2/3b137e27694d242a419b3fd2f8605837edfe77dae9462c63c3d7b41538e82591/diff:/var/lib/docker/overlay2/e9d4369b871c47acb146b73f8cbe14b89b0f74027df9117a7dc73f5dee8fee1c/diff:/var/lib/docker/overlay2/9379269362a969b07cc7d7f9faff9fa3b745529df38758733014a5dbe2470775/diff:/var/lib/docker/overlay2/9231c154723fa536d9894f703ec0388448e8611d5a01d54bca3a5b0a0b17ffd2/diff:/var/lib/docker/overlay2/9610e37ded5c6da7bd2c8edc56c3ae864637bb354f8ea3d6d1ccee6bd5c2aa7f/diff:/var/lib/docker/overlay2/025ecca5e756b1b8177204df7b2f2567a76dda456b2f1a8e312efd63150a8943/diff:/var/lib/docker/overlay2/7e69089e438e096c36ea0a4a37280fd036841e3287e57635e3407eb58fc0b6da/diff:/var/lib/docker/overlay2/c6d9ef67ed33e64c8ac8c4cdc7c33eb68f5266987969676165cabc2cf2fd346b/diff:/var/lib/docker/overlay2/394627c68237f7993b91eb0c377001630bb2e709dd58f65d899d44a3586dae91/diff:/var/lib/docker/overlay2/0c0c3c94789fc85cd70d9ee2b56d67ce6471d4dced47f21f15152d4edb6bc3e5/diff:/var/lib/docker/overlay2/849809e48c9bcbfe092aa063fcd274f284eeacde89acbb602b439d4cf0aef9b6/diff:/var/lib/docker/overlay2/49c27f0a55f204b161aa2da33ba8004f46cb93bf673975ad1b6286ce659db632/diff:/var/lib/docker/overlay2/a712a8f5cdb2f3840c706296240407405826d2936df034393c1ddf3cf2480b5f/diff:/var/lib/docker/overlay2/47949bfd134ff7a50def5e9b3af3424faf216354d1f157552f3c63c67c2728ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220629114717-24356",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220629114717-24356/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220629114717-24356",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220629114717-24356",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220629114717-24356",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0ad81cc98b0ebf2b160d8945fca2e2856e503fffc2084c3be728057b77e40b5b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59835"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59836"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59837"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59838"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59839"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0ad81cc98b0e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220629114717-24356": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b1f5e01895cc",
	                        "old-k8s-version-20220629114717-24356"
	                    ],
	                    "NetworkID": "7e2ec4ec0dd8da4d477d55acc03296107258203e7a7a266adf169e3b0ee9c64c",
	                    "EndpointID": "7041bb4c7eadd754f0ae15426e0376c2005b1379e2507d9f07e2b7d8eb3cb6d3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356: exit status 6 (444.418062ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0629 11:51:28.768918   39183 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220629114717-24356" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220629114717-24356" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220629114717-24356
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220629114717-24356:

-- stdout --
	[
	    {
	        "Id": "b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2",
	        "Created": "2022-06-29T18:47:24.686705454Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 227955,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T18:47:25.036356976Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/hosts",
	        "LogPath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2-json.log",
	        "Name": "/old-k8s-version-20220629114717-24356",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220629114717-24356:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220629114717-24356",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132-init/diff:/var/lib/docker/overlay2/fffebe0fdfada5807aeb835ff23043496ab70477725ee4f168b630301ac03e45/diff:/var/lib/docker/overlay2/d4eb6d2f34aa8e5c143d900dccdec5da9e3d130567442e6745d4efac5202fe49/diff:/var/lib/docker/overlay2/eb35fadba12ed9c48500d69b77e98e7dd72e90d3de5197d58b370df5b5dca4c7/diff:/var/lib/docker/overlay2/7b63894f671ef1edaa7c3b80a2acbde52dcdb21970e320799b6884e79553ea3e/diff:/var/lib/docker/overlay2/3740b6bc6ff226137eb09a6350d4395dc04bd9012c6c66125dc2ea6b663082cd/diff:/var/lib/docker/overlay2/a2fda66ed4937725e85838baed61cac418abe2ba55b4e664bf944246efcdd371/diff:/var/lib/docker/overlay2/574408913c5c73ee699b85768bbb4c0ce70e697bf6eb623e32017c62e8413acd/diff:/var/lib/docker/overlay2/1cde03c3877bfb18ad0533f814863e3030abec268ff30faceab8815ea7e2daf2/diff:/var/lib/docker/overlay2/52bf889e64b2ea0160f303622d5febb9c52b864e5a6dc2bfa5db90933ccaaa29/diff:/var/lib/docker/overlay2/b131e6ae4a7a7f5705d087e4001676276e4daa26d6acfc99799bb4992e322410/diff:/var/lib/docker/overlay2/3f5c774f6f46936a974bfc6530b012fda75a59b22450e3342486fe400ab4b531/diff:/var/lib/docker/overlay2/8462528084f0c44a79e421427e0e4bc9ddd7642428c47ff1899d41b265223245/diff:/var/lib/docker/overlay2/cb9765866d13ba37669ec242ea0a1af87c92c7291c716e52037a2ccadc64ac82/diff:/var/lib/docker/overlay2/f0d06e6fa53f3ca9622f1efcfac6fe3fd18d2e5b9e07be3d624b0b9987073e55/diff:/var/lib/docker/overlay2/4ebd12d8b25cff2d3d8a989c047b696088121f0964cc7f94c6d0178ef16e3e1f/diff:/var/lib/docker/overlay2/40e16f5720fd3a8c1c8792aea0ec143af819f19cad845dde40b57ed7e372ab73/diff:/var/lib/docker/overlay2/3ce5ee64ba683c997a13b7ffa65978b4c9652772729737facd794209d49251c3/diff:/var/lib/docker/overlay2/c55c549a78d490ea576942661ba65103ea2992693548217973bb8fa1a5948b74/diff:/var/lib/docker/overlay2/4651b16dbc2e22b8a43dc1154546514f2076168d12f9c108f85fe7c6e60325f0/diff:/var/lib/docker/overlay2/9576343ea03501b15b520a83ffdc675c6d9ecd501f6ffcf6564dd75aa4f2812a/diff:/var/lib/docker/overlay2/635ba7d01f96fd1ec1acabf157f4e5c00cbf80adf65b7f8873e444745fef2c9b/diff:/var/lib/docker/overlay2/6bbe0ce6ca00a7eb5bd7c22def5fcab4ebecab4a0b4cbc5ed236429671a41b6c/diff:/var/lib/docker/overlay2/b335551ba0fcfd6bff6ef5627289041f3083dc338e67b4f4728d4937bb6fb33a/diff:/var/lib/docker/overlay2/58cd90f6ad9016f3c4befb63eac504c9d2f0fc66251c5c9e3348080785d3cec4/diff:/var/lib/docker/overlay2/b7d943a8463e032d405d531846436b89574f10efeea6e4f2df92e3bb0e169d8e/diff:/var/lib/docker/overlay2/e633899f71c18e322af1b75837392bc89fd4275534b5bc70037965b0b80a770d/diff:/var/lib/docker/overlay2/651aabda39b5851bd186e23bc84f1029d819ed8eb032b13ac12f50f3d1486bfb/diff:/var/lib/docker/overlay2/3b137e27694d242a419b3fd2f8605837edfe77dae9462c63c3d7b41538e82591/diff:/var/lib/docker/overlay2/e9d4369b871c47acb146b73f8cbe14b89b0f74027df9117a7dc73f5dee8fee1c/diff:/var/lib/docker/overlay2/9379269362a969b07cc7d7f9faff9fa3b745529df38758733014a5dbe2470775/diff:/var/lib/docker/overlay2/9231c154723fa536d9894f703ec0388448e8611d5a01d54bca3a5b0a0b17ffd2/diff:/var/lib/docker/overlay2/9610e37ded5c6da7bd2c8edc56c3ae864637bb354f8ea3d6d1ccee6bd5c2aa7f/diff:/var/lib/docker/overlay2/025ecca5e756b1b8177204df7b2f2567a76dda456b2f1a8e312efd63150a8943/diff:/var/lib/docker/overlay2/7e69089e438e096c36ea0a4a37280fd036841e3287e57635e3407eb58fc0b6da/diff:/var/lib/docker/overlay2/c6d9ef67ed33e64c8ac8c4cdc7c33eb68f5266987969676165cabc2cf2fd346b/diff:/var/lib/docker/overlay2/394627c68237f7993b91eb0c377001630bb2e709dd58f65d899d44a3586dae91/diff:/var/lib/docker/overlay2/0c0c3c94789fc85cd70d9ee2b56d67ce6471d4dced47f21f15152d4edb6bc3e5/diff:/var/lib/docker/overlay2/849809e48c9bcbfe092aa063fcd274f284eeacde89acbb602b439d4cf0aef9b6/diff:/var/lib/docker/overlay2/49c27f0a55f204b161aa2da33ba8004f46cb93bf673975ad1b6286ce659db632/diff:/var/lib/docker/overlay2/a712a8f5cdb2f3840c706296240407405826d2936df034393c1ddf3cf2480b5f/diff:/var/lib/docker/overlay2/47949bfd134ff7a50def5e9b3af3424faf216354d1f157552f3c63c67c2728ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220629114717-24356",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220629114717-24356/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220629114717-24356",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220629114717-24356",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220629114717-24356",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0ad81cc98b0ebf2b160d8945fca2e2856e503fffc2084c3be728057b77e40b5b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59835"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59836"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59837"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59838"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59839"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0ad81cc98b0e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220629114717-24356": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b1f5e01895cc",
	                        "old-k8s-version-20220629114717-24356"
	                    ],
	                    "NetworkID": "7e2ec4ec0dd8da4d477d55acc03296107258203e7a7a266adf169e3b0ee9c64c",
	                    "EndpointID": "7041bb4c7eadd754f0ae15426e0376c2005b1379e2507d9f07e2b7d8eb3cb6d3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356: exit status 6 (455.541474ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0629 11:51:29.298349   39195 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220629114717-24356" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220629114717-24356" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (1.08s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220629114717-24356 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0629 11:51:30.500604   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629112951-24356/client.crt: no such file or directory
E0629 11:51:30.776449   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629112951-24356/client.crt: no such file or directory
E0629 11:51:34.842273   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/bridge-20220629112950-24356/client.crt: no such file or directory
E0629 11:51:55.323203   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/bridge-20220629112950-24356/client.crt: no such file or directory
E0629 11:51:59.499131   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/enable-default-cni-20220629112950-24356/client.crt: no such file or directory
E0629 11:51:59.504294   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/enable-default-cni-20220629112950-24356/client.crt: no such file or directory
E0629 11:51:59.514936   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/enable-default-cni-20220629112950-24356/client.crt: no such file or directory
E0629 11:51:59.535693   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/enable-default-cni-20220629112950-24356/client.crt: no such file or directory
E0629 11:51:59.577111   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/enable-default-cni-20220629112950-24356/client.crt: no such file or directory
E0629 11:51:59.659326   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/enable-default-cni-20220629112950-24356/client.crt: no such file or directory
E0629 11:51:59.821584   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/enable-default-cni-20220629112950-24356/client.crt: no such file or directory
E0629 11:52:00.142450   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/enable-default-cni-20220629112950-24356/client.crt: no such file or directory
E0629 11:52:00.642284   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629112951-24356/client.crt: no such file or directory
E0629 11:52:00.783542   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/enable-default-cni-20220629112950-24356/client.crt: no such file or directory
E0629 11:52:02.063780   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/enable-default-cni-20220629112950-24356/client.crt: no such file or directory
E0629 11:52:04.624354   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/enable-default-cni-20220629112950-24356/client.crt: no such file or directory
E0629 11:52:09.745384   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/enable-default-cni-20220629112950-24356/client.crt: no such file or directory
E0629 11:52:18.292629   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubenet-20220629112950-24356/client.crt: no such file or directory
E0629 11:52:18.297988   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubenet-20220629112950-24356/client.crt: no such file or directory
E0629 11:52:18.309307   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubenet-20220629112950-24356/client.crt: no such file or directory
E0629 11:52:18.330969   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubenet-20220629112950-24356/client.crt: no such file or directory
E0629 11:52:18.371820   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubenet-20220629112950-24356/client.crt: no such file or directory
E0629 11:52:18.452955   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubenet-20220629112950-24356/client.crt: no such file or directory
E0629 11:52:18.613132   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubenet-20220629112950-24356/client.crt: no such file or directory
E0629 11:52:18.933293   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubenet-20220629112950-24356/client.crt: no such file or directory
E0629 11:52:19.599243   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubenet-20220629112950-24356/client.crt: no such file or directory
E0629 11:52:19.988013   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/enable-default-cni-20220629112950-24356/client.crt: no such file or directory
E0629 11:52:20.879429   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubenet-20220629112950-24356/client.crt: no such file or directory
E0629 11:52:23.439716   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubenet-20220629112950-24356/client.crt: no such file or directory
E0629 11:52:28.333875   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629112951-24356/client.crt: no such file or directory
E0629 11:52:28.562206   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubenet-20220629112950-24356/client.crt: no such file or directory
E0629 11:52:36.286740   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/bridge-20220629112950-24356/client.crt: no such file or directory
E0629 11:52:38.804776   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubenet-20220629112950-24356/client.crt: no such file or directory
E0629 11:52:40.469584   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/enable-default-cni-20220629112950-24356/client.crt: no such file or directory
E0629 11:52:43.452534   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/false-20220629112951-24356/client.crt: no such file or directory
E0629 11:52:52.699496   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629112951-24356/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220629114717-24356 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m29.145586536s)

-- stdout --
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220629114717-24356 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-20220629114717-24356 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220629114717-24356 describe deploy/metrics-server -n kube-system: exit status 1 (29.495625ms)

** stderr ** 
	error: context "old-k8s-version-20220629114717-24356" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-20220629114717-24356 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220629114717-24356
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220629114717-24356:

-- stdout --
	[
	    {
	        "Id": "b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2",
	        "Created": "2022-06-29T18:47:24.686705454Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 227955,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T18:47:25.036356976Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/hosts",
	        "LogPath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2-json.log",
	        "Name": "/old-k8s-version-20220629114717-24356",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220629114717-24356:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220629114717-24356",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132-init/diff:/var/lib/docker/overlay2/fffebe0fdfada5807aeb835ff23043496ab70477725ee4f168b630301ac03e45/diff:/var/lib/docker/overlay2/d4eb6d2f34aa8e5c143d900dccdec5da9e3d130567442e6745d4efac5202fe49/diff:/var/lib/docker/overlay2/eb35fadba12ed9c48500d69b77e98e7dd72e90d3de5197d58b370df5b5dca4c7/diff:/var/lib/docker/overlay2/7b63894f671ef1edaa7c3b80a2acbde52dcdb21970e320799b6884e79553ea3e/diff:/var/lib/docker/overlay2/3740b6bc6ff226137eb09a6350d4395dc04bd9012c6c66125dc2ea6b663082cd/diff:/var/lib/docker/overlay2/a2fda66ed4937725e85838baed61cac418abe2ba55b4e664bf944246efcdd371/diff:/var/lib/docker/overlay2/574408913c5c73ee699b85768bbb4c0ce70e697bf6eb623e32017c62e8413acd/diff:/var/lib/docker/overlay2/1cde03c3877bfb18ad0533f814863e3030abec268ff30faceab8815ea7e2daf2/diff:/var/lib/docker/overlay2/52bf889e64b2ea0160f303622d5febb9c52b864e5a6dc2bfa5db90933ccaaa29/diff:/var/lib/docker/overlay2/b131e6
ae4a7a7f5705d087e4001676276e4daa26d6acfc99799bb4992e322410/diff:/var/lib/docker/overlay2/3f5c774f6f46936a974bfc6530b012fda75a59b22450e3342486fe400ab4b531/diff:/var/lib/docker/overlay2/8462528084f0c44a79e421427e0e4bc9ddd7642428c47ff1899d41b265223245/diff:/var/lib/docker/overlay2/cb9765866d13ba37669ec242ea0a1af87c92c7291c716e52037a2ccadc64ac82/diff:/var/lib/docker/overlay2/f0d06e6fa53f3ca9622f1efcfac6fe3fd18d2e5b9e07be3d624b0b9987073e55/diff:/var/lib/docker/overlay2/4ebd12d8b25cff2d3d8a989c047b696088121f0964cc7f94c6d0178ef16e3e1f/diff:/var/lib/docker/overlay2/40e16f5720fd3a8c1c8792aea0ec143af819f19cad845dde40b57ed7e372ab73/diff:/var/lib/docker/overlay2/3ce5ee64ba683c997a13b7ffa65978b4c9652772729737facd794209d49251c3/diff:/var/lib/docker/overlay2/c55c549a78d490ea576942661ba65103ea2992693548217973bb8fa1a5948b74/diff:/var/lib/docker/overlay2/4651b16dbc2e22b8a43dc1154546514f2076168d12f9c108f85fe7c6e60325f0/diff:/var/lib/docker/overlay2/9576343ea03501b15b520a83ffdc675c6d9ecd501f6ffcf6564dd75aa4f2812a/diff:/var/lib/d
ocker/overlay2/635ba7d01f96fd1ec1acabf157f4e5c00cbf80adf65b7f8873e444745fef2c9b/diff:/var/lib/docker/overlay2/6bbe0ce6ca00a7eb5bd7c22def5fcab4ebecab4a0b4cbc5ed236429671a41b6c/diff:/var/lib/docker/overlay2/b335551ba0fcfd6bff6ef5627289041f3083dc338e67b4f4728d4937bb6fb33a/diff:/var/lib/docker/overlay2/58cd90f6ad9016f3c4befb63eac504c9d2f0fc66251c5c9e3348080785d3cec4/diff:/var/lib/docker/overlay2/b7d943a8463e032d405d531846436b89574f10efeea6e4f2df92e3bb0e169d8e/diff:/var/lib/docker/overlay2/e633899f71c18e322af1b75837392bc89fd4275534b5bc70037965b0b80a770d/diff:/var/lib/docker/overlay2/651aabda39b5851bd186e23bc84f1029d819ed8eb032b13ac12f50f3d1486bfb/diff:/var/lib/docker/overlay2/3b137e27694d242a419b3fd2f8605837edfe77dae9462c63c3d7b41538e82591/diff:/var/lib/docker/overlay2/e9d4369b871c47acb146b73f8cbe14b89b0f74027df9117a7dc73f5dee8fee1c/diff:/var/lib/docker/overlay2/9379269362a969b07cc7d7f9faff9fa3b745529df38758733014a5dbe2470775/diff:/var/lib/docker/overlay2/9231c154723fa536d9894f703ec0388448e8611d5a01d54bca3a5b0a0b1
7ffd2/diff:/var/lib/docker/overlay2/9610e37ded5c6da7bd2c8edc56c3ae864637bb354f8ea3d6d1ccee6bd5c2aa7f/diff:/var/lib/docker/overlay2/025ecca5e756b1b8177204df7b2f2567a76dda456b2f1a8e312efd63150a8943/diff:/var/lib/docker/overlay2/7e69089e438e096c36ea0a4a37280fd036841e3287e57635e3407eb58fc0b6da/diff:/var/lib/docker/overlay2/c6d9ef67ed33e64c8ac8c4cdc7c33eb68f5266987969676165cabc2cf2fd346b/diff:/var/lib/docker/overlay2/394627c68237f7993b91eb0c377001630bb2e709dd58f65d899d44a3586dae91/diff:/var/lib/docker/overlay2/0c0c3c94789fc85cd70d9ee2b56d67ce6471d4dced47f21f15152d4edb6bc3e5/diff:/var/lib/docker/overlay2/849809e48c9bcbfe092aa063fcd274f284eeacde89acbb602b439d4cf0aef9b6/diff:/var/lib/docker/overlay2/49c27f0a55f204b161aa2da33ba8004f46cb93bf673975ad1b6286ce659db632/diff:/var/lib/docker/overlay2/a712a8f5cdb2f3840c706296240407405826d2936df034393c1ddf3cf2480b5f/diff:/var/lib/docker/overlay2/47949bfd134ff7a50def5e9b3af3424faf216354d1f157552f3c63c67c2728ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220629114717-24356",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220629114717-24356/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220629114717-24356",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220629114717-24356",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220629114717-24356",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0ad81cc98b0ebf2b160d8945fca2e2856e503fffc2084c3be728057b77e40b5b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59835"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59836"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59837"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59838"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59839"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0ad81cc98b0e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220629114717-24356": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b1f5e01895cc",
	                        "old-k8s-version-20220629114717-24356"
	                    ],
	                    "NetworkID": "7e2ec4ec0dd8da4d477d55acc03296107258203e7a7a266adf169e3b0ee9c64c",
	                    "EndpointID": "7041bb4c7eadd754f0ae15426e0376c2005b1379e2507d9f07e2b7d8eb3cb6d3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356: exit status 6 (443.784527ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0629 11:52:58.992926   39293 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220629114717-24356" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220629114717-24356" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.69s)

TestStartStop/group/old-k8s-version/serial/SecondStart (492.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220629114717-24356 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0629 11:53:21.431051   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/enable-default-cni-20220629112950-24356/client.crt: no such file or directory
E0629 11:53:40.249297   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubenet-20220629112950-24356/client.crt: no such file or directory
E0629 11:53:46.661678   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629112951-24356/client.crt: no such file or directory
E0629 11:53:58.209579   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/bridge-20220629112950-24356/client.crt: no such file or directory
E0629 11:54:14.346586   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629112951-24356/client.crt: no such file or directory
E0629 11:54:24.455409   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
E0629 11:54:43.354133   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/enable-default-cni-20220629112950-24356/client.crt: no such file or directory
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20220629114717-24356 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m7.707011418s)
-- stdout --
	* [old-k8s-version-20220629114717-24356] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14420
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	* Kubernetes 1.24.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.24.2
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-20220629114717-24356 in cluster old-k8s-version-20220629114717-24356
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-20220629114717-24356" ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	
-- /stdout --
** stderr ** 
	I0629 11:53:01.020541   39321 out.go:296] Setting OutFile to fd 1 ...
	I0629 11:53:01.020674   39321 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:53:01.020678   39321 out.go:309] Setting ErrFile to fd 2...
	I0629 11:53:01.020682   39321 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:53:01.021047   39321 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 11:53:01.021305   39321 out.go:303] Setting JSON to false
	I0629 11:53:01.036590   39321 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":10349,"bootTime":1656518432,"procs":373,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0629 11:53:01.036679   39321 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 11:53:01.057889   39321 out.go:177] * [old-k8s-version-20220629114717-24356] minikube v1.26.0 on Darwin 12.4
	I0629 11:53:01.100418   39321 notify.go:193] Checking for updates...
	I0629 11:53:01.121817   39321 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 11:53:01.142983   39321 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 11:53:01.164005   39321 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0629 11:53:01.185015   39321 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 11:53:01.206165   39321 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 11:53:01.228648   39321 config.go:178] Loaded profile config "old-k8s-version-20220629114717-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0629 11:53:01.251012   39321 out.go:177] * Kubernetes 1.24.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.24.2
	I0629 11:53:01.271945   39321 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 11:53:01.341174   39321 docker.go:137] docker version: linux-20.10.16
	I0629 11:53:01.341305   39321 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:53:01.464360   39321 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 18:53:01.403963306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:53:01.486719   39321 out.go:177] * Using the docker driver based on existing profile
	I0629 11:53:01.529615   39321 start.go:284] selected driver: docker
	I0629 11:53:01.529644   39321 start.go:808] validating driver "docker" against &{Name:old-k8s-version-20220629114717-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220629114717-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:53:01.529795   39321 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 11:53:01.533103   39321 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:53:01.655473   39321 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 18:53:01.595697353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:53:01.655650   39321 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0629 11:53:01.655668   39321 cni.go:95] Creating CNI manager for ""
	I0629 11:53:01.655678   39321 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:53:01.655687   39321 start_flags.go:310] config:
	{Name:old-k8s-version-20220629114717-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220629114717-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:53:01.677730   39321 out.go:177] * Starting control plane node old-k8s-version-20220629114717-24356 in cluster old-k8s-version-20220629114717-24356
	I0629 11:53:01.699300   39321 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 11:53:01.720322   39321 out.go:177] * Pulling base image ...
	I0629 11:53:01.762354   39321 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0629 11:53:01.762361   39321 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 11:53:01.762438   39321 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0629 11:53:01.762454   39321 cache.go:57] Caching tarball of preloaded images
	I0629 11:53:01.762660   39321 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 11:53:01.762692   39321 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0629 11:53:01.763793   39321 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/config.json ...
	I0629 11:53:01.827401   39321 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 11:53:01.827423   39321 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 11:53:01.827436   39321 cache.go:208] Successfully downloaded all kic artifacts
	I0629 11:53:01.827507   39321 start.go:352] acquiring machines lock for old-k8s-version-20220629114717-24356: {Name:mkeaf278b11a6771761242ef819919656a0edee3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 11:53:01.827595   39321 start.go:356] acquired machines lock for "old-k8s-version-20220629114717-24356" in 67.458µs
	I0629 11:53:01.827616   39321 start.go:94] Skipping create...Using existing machine configuration
	I0629 11:53:01.827625   39321 fix.go:55] fixHost starting: 
	I0629 11:53:01.827860   39321 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220629114717-24356 --format={{.State.Status}}
	I0629 11:53:01.894263   39321 fix.go:103] recreateIfNeeded on old-k8s-version-20220629114717-24356: state=Stopped err=<nil>
	W0629 11:53:01.894295   39321 fix.go:129] unexpected machine state, will restart: <nil>
	I0629 11:53:01.937823   39321 out.go:177] * Restarting existing docker container for "old-k8s-version-20220629114717-24356" ...
	I0629 11:53:01.958803   39321 cli_runner.go:164] Run: docker start old-k8s-version-20220629114717-24356
	I0629 11:53:02.302625   39321 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220629114717-24356 --format={{.State.Status}}
	I0629 11:53:02.379116   39321 kic.go:416] container "old-k8s-version-20220629114717-24356" state is running.
	I0629 11:53:02.379733   39321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220629114717-24356
	I0629 11:53:02.458199   39321 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/config.json ...
	I0629 11:53:02.458585   39321 machine.go:88] provisioning docker machine ...
	I0629 11:53:02.458625   39321 ubuntu.go:169] provisioning hostname "old-k8s-version-20220629114717-24356"
	I0629 11:53:02.458691   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:02.536976   39321 main.go:134] libmachine: Using SSH client type: native
	I0629 11:53:02.537219   39321 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60321 <nil> <nil>}
	I0629 11:53:02.537234   39321 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220629114717-24356 && echo "old-k8s-version-20220629114717-24356" | sudo tee /etc/hostname
	I0629 11:53:02.664885   39321 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220629114717-24356
	
	I0629 11:53:02.664959   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:02.738843   39321 main.go:134] libmachine: Using SSH client type: native
	I0629 11:53:02.739033   39321 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60321 <nil> <nil>}
	I0629 11:53:02.739051   39321 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220629114717-24356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220629114717-24356/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220629114717-24356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 11:53:02.858236   39321 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 11:53:02.858255   39321 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube}
	I0629 11:53:02.858272   39321 ubuntu.go:177] setting up certificates
	I0629 11:53:02.858281   39321 provision.go:83] configureAuth start
	I0629 11:53:02.858345   39321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220629114717-24356
	I0629 11:53:02.929876   39321 provision.go:138] copyHostCerts
	I0629 11:53:02.929998   39321 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem, removing ...
	I0629 11:53:02.930014   39321 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem
	I0629 11:53:02.930137   39321 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem (1082 bytes)
	I0629 11:53:02.930410   39321 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem, removing ...
	I0629 11:53:02.930419   39321 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem
	I0629 11:53:02.930485   39321 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem (1123 bytes)
	I0629 11:53:02.930681   39321 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem, removing ...
	I0629 11:53:02.930688   39321 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem
	I0629 11:53:02.930750   39321 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem (1675 bytes)
	I0629 11:53:02.930868   39321 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220629114717-24356 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220629114717-24356]
	I0629 11:53:03.099477   39321 provision.go:172] copyRemoteCerts
	I0629 11:53:03.099537   39321 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 11:53:03.099583   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:03.171561   39321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60321 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/old-k8s-version-20220629114717-24356/id_rsa Username:docker}
	I0629 11:53:03.259681   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0629 11:53:03.277353   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0629 11:53:03.294474   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0629 11:53:03.311679   39321 provision.go:86] duration metric: configureAuth took 453.364787ms
	I0629 11:53:03.311691   39321 ubuntu.go:193] setting minikube options for container-runtime
	I0629 11:53:03.311820   39321 config.go:178] Loaded profile config "old-k8s-version-20220629114717-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0629 11:53:03.311873   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:03.383560   39321 main.go:134] libmachine: Using SSH client type: native
	I0629 11:53:03.383791   39321 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60321 <nil> <nil>}
	I0629 11:53:03.383829   39321 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0629 11:53:03.505174   39321 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0629 11:53:03.505190   39321 ubuntu.go:71] root file system type: overlay
	I0629 11:53:03.505337   39321 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0629 11:53:03.505412   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:03.576780   39321 main.go:134] libmachine: Using SSH client type: native
	I0629 11:53:03.576940   39321 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60321 <nil> <nil>}
	I0629 11:53:03.576993   39321 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0629 11:53:03.702032   39321 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0629 11:53:03.702109   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:03.773428   39321 main.go:134] libmachine: Using SSH client type: native
	I0629 11:53:03.773587   39321 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60321 <nil> <nil>}
	I0629 11:53:03.773602   39321 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0629 11:53:03.895380   39321 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 11:53:03.895393   39321 machine.go:91] provisioned docker machine in 1.436757152s
	I0629 11:53:03.895403   39321 start.go:306] post-start starting for "old-k8s-version-20220629114717-24356" (driver="docker")
	I0629 11:53:03.895408   39321 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 11:53:03.895461   39321 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 11:53:03.895508   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:03.971006   39321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60321 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/old-k8s-version-20220629114717-24356/id_rsa Username:docker}
	I0629 11:53:04.056695   39321 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 11:53:04.060270   39321 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 11:53:04.060284   39321 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 11:53:04.060291   39321 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 11:53:04.060295   39321 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 11:53:04.060306   39321 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/addons for local assets ...
	I0629 11:53:04.060434   39321 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files for local assets ...
	I0629 11:53:04.060599   39321 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem -> 243562.pem in /etc/ssl/certs
	I0629 11:53:04.060774   39321 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 11:53:04.067711   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /etc/ssl/certs/243562.pem (1708 bytes)
	I0629 11:53:04.085232   39321 start.go:309] post-start completed in 189.815092ms
	I0629 11:53:04.085301   39321 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 11:53:04.085359   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:04.156347   39321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60321 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/old-k8s-version-20220629114717-24356/id_rsa Username:docker}
	I0629 11:53:04.238000   39321 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 11:53:04.242481   39321 fix.go:57] fixHost completed within 2.414782183s
	I0629 11:53:04.242492   39321 start.go:81] releasing machines lock for "old-k8s-version-20220629114717-24356", held for 2.414817597s
	I0629 11:53:04.242573   39321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220629114717-24356
	I0629 11:53:04.313552   39321 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 11:53:04.313558   39321 ssh_runner.go:195] Run: systemctl --version
	I0629 11:53:04.313633   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:04.313644   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:04.389089   39321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60321 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/old-k8s-version-20220629114717-24356/id_rsa Username:docker}
	I0629 11:53:04.391746   39321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60321 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/old-k8s-version-20220629114717-24356/id_rsa Username:docker}
	I0629 11:53:04.950787   39321 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0629 11:53:04.961037   39321 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0629 11:53:04.961098   39321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 11:53:04.972557   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 11:53:04.985220   39321 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0629 11:53:05.057913   39321 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0629 11:53:05.127457   39321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 11:53:05.201096   39321 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0629 11:53:05.403377   39321 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 11:53:05.442119   39321 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 11:53:05.520315   39321 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0629 11:53:05.520496   39321 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220629114717-24356 dig +short host.docker.internal
	I0629 11:53:05.646740   39321 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0629 11:53:05.646853   39321 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0629 11:53:05.651058   39321 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 11:53:05.662556   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:05.733785   39321 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0629 11:53:05.733877   39321 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 11:53:05.763532   39321 docker.go:602] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0629 11:53:05.763547   39321 docker.go:533] Images already preloaded, skipping extraction
	I0629 11:53:05.763613   39321 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 11:53:05.793235   39321 docker.go:602] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0629 11:53:05.793253   39321 cache_images.go:84] Images are preloaded, skipping loading
	I0629 11:53:05.793340   39321 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0629 11:53:05.867180   39321 cni.go:95] Creating CNI manager for ""
	I0629 11:53:05.867191   39321 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:53:05.867206   39321 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0629 11:53:05.867219   39321 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220629114717-24356 NodeName:old-k8s-version-20220629114717-24356 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 11:53:05.867334   39321 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220629114717-24356"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220629114717-24356
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0629 11:53:05.867405   39321 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220629114717-24356 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220629114717-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0629 11:53:05.867467   39321 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0629 11:53:05.874886   39321 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 11:53:05.874948   39321 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0629 11:53:05.881929   39321 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0629 11:53:05.894526   39321 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 11:53:05.906971   39321 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0629 11:53:05.919357   39321 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0629 11:53:05.923010   39321 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 11:53:05.934256   39321 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356 for IP: 192.168.76.2
	I0629 11:53:05.934374   39321 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key
	I0629 11:53:05.934432   39321 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key
	I0629 11:53:05.934518   39321 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/client.key
	I0629 11:53:05.934586   39321 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/apiserver.key.31bdca25
	I0629 11:53:05.934644   39321 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/proxy-client.key
	I0629 11:53:05.934860   39321 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem (1338 bytes)
	W0629 11:53:05.934902   39321 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356_empty.pem, impossibly tiny 0 bytes
	I0629 11:53:05.934916   39321 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem (1679 bytes)
	I0629 11:53:05.934951   39321 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem (1082 bytes)
	I0629 11:53:05.934990   39321 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem (1123 bytes)
	I0629 11:53:05.935032   39321 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem (1675 bytes)
	I0629 11:53:05.935095   39321 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem (1708 bytes)
	I0629 11:53:05.935616   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0629 11:53:05.952783   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0629 11:53:05.969962   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0629 11:53:05.986903   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0629 11:53:06.004120   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 11:53:06.022586   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0629 11:53:06.059761   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 11:53:06.076874   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0629 11:53:06.093750   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem --> /usr/share/ca-certificates/24356.pem (1338 bytes)
	I0629 11:53:06.110970   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /usr/share/ca-certificates/243562.pem (1708 bytes)
	I0629 11:53:06.128088   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 11:53:06.146358   39321 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0629 11:53:06.159473   39321 ssh_runner.go:195] Run: openssl version
	I0629 11:53:06.164773   39321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 11:53:06.172822   39321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:53:06.176828   39321 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 17:54 /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:53:06.176875   39321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:53:06.182239   39321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 11:53:06.189362   39321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24356.pem && ln -fs /usr/share/ca-certificates/24356.pem /etc/ssl/certs/24356.pem"
	I0629 11:53:06.197559   39321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24356.pem
	I0629 11:53:06.201505   39321 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 17:58 /usr/share/ca-certificates/24356.pem
	I0629 11:53:06.201555   39321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24356.pem
	I0629 11:53:06.207119   39321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24356.pem /etc/ssl/certs/51391683.0"
	I0629 11:53:06.214849   39321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/243562.pem && ln -fs /usr/share/ca-certificates/243562.pem /etc/ssl/certs/243562.pem"
	I0629 11:53:06.222597   39321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/243562.pem
	I0629 11:53:06.226582   39321 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 17:58 /usr/share/ca-certificates/243562.pem
	I0629 11:53:06.226621   39321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/243562.pem
	I0629 11:53:06.231864   39321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/243562.pem /etc/ssl/certs/3ec20f2e.0"
	I0629 11:53:06.239364   39321 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220629114717-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220629114717-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:53:06.239478   39321 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 11:53:06.268678   39321 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0629 11:53:06.276184   39321 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0629 11:53:06.276201   39321 kubeadm.go:626] restartCluster start
	I0629 11:53:06.276249   39321 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0629 11:53:06.282969   39321 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:06.283027   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:06.354486   39321 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220629114717-24356" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 11:53:06.354648   39321 kubeconfig.go:127] "old-k8s-version-20220629114717-24356" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig - will repair!
	I0629 11:53:06.354967   39321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk20ebad566718388182fa7c9da1cb4ef6bd9ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:53:06.356063   39321 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0629 11:53:06.363888   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:06.363980   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:06.372296   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:06.572897   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:06.573039   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:06.583383   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:06.773156   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:06.773259   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:06.783501   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:06.972425   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:06.972514   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:06.981322   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:07.173227   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:07.173323   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:07.183915   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:07.373230   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:07.373327   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:07.383900   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:07.573955   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:07.574107   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:07.584389   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:07.774471   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:07.774706   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:07.784989   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:07.972462   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:07.972554   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:07.982777   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:08.172517   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:08.172614   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:08.183424   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:08.372918   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:08.373101   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:08.383561   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:08.572500   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:08.572573   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:08.582518   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:08.772633   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:08.772771   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:08.783206   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:08.972740   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:08.972875   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:08.983311   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:09.172733   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:09.172846   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:09.183530   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:09.372639   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:09.372862   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:09.383814   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:09.383824   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:09.383870   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:09.392053   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:09.392064   39321 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0629 11:53:09.392072   39321 kubeadm.go:1092] stopping kube-system containers ...
	I0629 11:53:09.392131   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 11:53:09.420212   39321 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0629 11:53:09.433676   39321 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 11:53:09.441303   39321 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5747 Jun 29 18:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5787 Jun 29 18:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5935 Jun 29 18:49 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5731 Jun 29 18:49 /etc/kubernetes/scheduler.conf
	
	I0629 11:53:09.441356   39321 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0629 11:53:09.448705   39321 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0629 11:53:09.455863   39321 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0629 11:53:09.463598   39321 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0629 11:53:09.470944   39321 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 11:53:09.479430   39321 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0629 11:53:09.479451   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:53:09.530261   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:53:10.632194   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.101882408s)
	I0629 11:53:10.632212   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:53:10.847331   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:53:10.904889   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:53:10.963035   39321 api_server.go:51] waiting for apiserver process to appear ...
	I0629 11:53:10.963098   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:11.471629   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:11.971653   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:12.471604   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:12.973656   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:13.471720   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:13.971792   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:14.473862   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:14.972657   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:15.472511   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:15.973033   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:16.472375   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:16.972679   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:17.471980   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:17.972744   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:18.472610   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:18.972373   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:19.471947   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:19.972438   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:20.472581   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:20.972723   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:21.473577   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:21.972016   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:22.472026   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:22.973315   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:23.471896   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:23.972447   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:24.471973   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:24.973386   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:25.473637   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:25.972648   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:26.472198   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:26.972657   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:27.472346   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:27.972638   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:28.473151   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:28.972205   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:29.472234   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:29.972717   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:30.472697   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:30.972995   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:31.472433   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:31.972406   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:32.472190   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:32.974199   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:33.472460   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:33.972993   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:34.472909   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:34.972289   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:35.473152   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:35.972577   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:36.474436   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:36.973628   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:37.472308   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:37.973415   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:38.472767   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:38.974410   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:39.473141   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:39.972605   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:40.472482   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:40.972864   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:41.472723   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:41.974616   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:42.472627   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:42.972675   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:43.472686   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:43.973714   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:44.473536   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:44.973783   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:45.472730   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:45.972999   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:46.473581   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:46.973015   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:47.472857   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:47.972929   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:48.474126   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:48.972902   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:49.472981   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:49.972804   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:50.473092   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:50.973396   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:51.473121   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:51.973014   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:52.473008   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:52.973431   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:53.472906   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:53.973182   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:54.473436   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:54.974299   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:55.473284   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:55.973150   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:56.474409   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:56.973527   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:57.472991   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:57.972998   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:58.473348   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:58.973142   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:59.473282   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:59.973927   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:00.473094   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:00.974069   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:01.474438   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:01.973191   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:02.473214   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:02.973108   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:03.475258   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:03.974208   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:04.473408   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:04.975325   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:05.473242   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:05.974115   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:06.474575   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:06.973453   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:07.473535   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:07.973316   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:08.473278   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:08.974032   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:09.473400   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:09.973400   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:10.473858   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:10.973493   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:11.005027   39321 logs.go:274] 0 containers: []
	W0629 11:54:11.005047   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:11.005174   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:11.034514   39321 logs.go:274] 0 containers: []
	W0629 11:54:11.044684   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:11.044771   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:11.074864   39321 logs.go:274] 0 containers: []
	W0629 11:54:11.074876   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:11.074948   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:11.107049   39321 logs.go:274] 0 containers: []
	W0629 11:54:11.107060   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:11.107125   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:11.136126   39321 logs.go:274] 0 containers: []
	W0629 11:54:11.136137   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:11.136202   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:11.166106   39321 logs.go:274] 0 containers: []
	W0629 11:54:11.166123   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:11.166197   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:11.195233   39321 logs.go:274] 0 containers: []
	W0629 11:54:11.195244   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:11.195311   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:11.224314   39321 logs.go:274] 0 containers: []
	W0629 11:54:11.224326   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:11.224333   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:11.224341   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:11.238284   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:11.238295   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:13.292784   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054415695s)
	I0629 11:54:13.292934   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:13.292941   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:13.333282   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:13.333295   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:13.345303   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:13.345316   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:13.397489   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:15.899245   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:15.973676   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:16.003497   39321 logs.go:274] 0 containers: []
	W0629 11:54:16.003509   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:16.003567   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:16.033526   39321 logs.go:274] 0 containers: []
	W0629 11:54:16.044819   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:16.044901   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:16.076936   39321 logs.go:274] 0 containers: []
	W0629 11:54:16.076948   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:16.077013   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:16.107083   39321 logs.go:274] 0 containers: []
	W0629 11:54:16.107095   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:16.107151   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:16.138323   39321 logs.go:274] 0 containers: []
	W0629 11:54:16.138335   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:16.138389   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:16.167336   39321 logs.go:274] 0 containers: []
	W0629 11:54:16.167348   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:16.167417   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:16.198137   39321 logs.go:274] 0 containers: []
	W0629 11:54:16.198149   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:16.198204   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:16.227979   39321 logs.go:274] 0 containers: []
	W0629 11:54:16.227992   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:16.227999   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:16.228012   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:16.267349   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:16.267364   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:16.279505   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:16.279520   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:16.331710   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:16.331728   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:16.331736   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:16.345394   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:16.345405   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:18.399883   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05440587s)
	I0629 11:54:20.900466   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:20.973806   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:21.004342   39321 logs.go:274] 0 containers: []
	W0629 11:54:21.004356   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:21.004415   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:21.034479   39321 logs.go:274] 0 containers: []
	W0629 11:54:21.045019   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:21.045125   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:21.075792   39321 logs.go:274] 0 containers: []
	W0629 11:54:21.075805   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:21.075876   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:21.113638   39321 logs.go:274] 0 containers: []
	W0629 11:54:21.113651   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:21.113708   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:21.143417   39321 logs.go:274] 0 containers: []
	W0629 11:54:21.143429   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:21.143492   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:21.172595   39321 logs.go:274] 0 containers: []
	W0629 11:54:21.172607   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:21.172672   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:21.201866   39321 logs.go:274] 0 containers: []
	W0629 11:54:21.201878   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:21.201937   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:21.230654   39321 logs.go:274] 0 containers: []
	W0629 11:54:21.230664   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:21.230671   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:21.230677   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:21.271551   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:21.271572   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:21.284291   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:21.284305   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:21.340570   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:21.340584   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:21.340593   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:21.354206   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:21.354218   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:23.410357   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056065961s)
	I0629 11:54:25.911253   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:25.974183   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:26.006527   39321 logs.go:274] 0 containers: []
	W0629 11:54:26.006539   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:26.006593   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:26.034855   39321 logs.go:274] 0 containers: []
	W0629 11:54:26.045013   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:26.045108   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:26.075260   39321 logs.go:274] 0 containers: []
	W0629 11:54:26.075272   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:26.075332   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:26.104633   39321 logs.go:274] 0 containers: []
	W0629 11:54:26.104645   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:26.104702   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:26.134389   39321 logs.go:274] 0 containers: []
	W0629 11:54:26.134402   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:26.134460   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:26.165666   39321 logs.go:274] 0 containers: []
	W0629 11:54:26.165678   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:26.165744   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:26.196944   39321 logs.go:274] 0 containers: []
	W0629 11:54:26.196959   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:26.197023   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:26.224887   39321 logs.go:274] 0 containers: []
	W0629 11:54:26.224902   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:26.224910   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:26.224917   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:26.264545   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:26.264559   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:26.275868   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:26.275882   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:26.329330   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:26.329346   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:26.329353   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:26.343299   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:26.343311   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:28.396021   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052636665s)
	I0629 11:54:30.896828   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:30.973978   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:31.008212   39321 logs.go:274] 0 containers: []
	W0629 11:54:31.008225   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:31.008285   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:31.041367   39321 logs.go:274] 0 containers: []
	W0629 11:54:31.045055   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:31.045123   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:31.077818   39321 logs.go:274] 0 containers: []
	W0629 11:54:31.077830   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:31.077893   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:31.108115   39321 logs.go:274] 0 containers: []
	W0629 11:54:31.108128   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:31.108192   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:31.138455   39321 logs.go:274] 0 containers: []
	W0629 11:54:31.138469   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:31.138532   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:31.169314   39321 logs.go:274] 0 containers: []
	W0629 11:54:31.169329   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:31.169389   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:31.199503   39321 logs.go:274] 0 containers: []
	W0629 11:54:31.199515   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:31.199584   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:31.230870   39321 logs.go:274] 0 containers: []
	W0629 11:54:31.230884   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:31.230893   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:31.230912   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:31.274860   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:31.274876   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:31.289572   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:31.289588   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:31.345087   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:31.345100   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:31.345106   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:31.362082   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:31.362095   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:33.419132   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056963483s)
	I0629 11:54:35.919752   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:35.976084   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:36.006737   39321 logs.go:274] 0 containers: []
	W0629 11:54:36.006750   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:36.006814   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:36.036631   39321 logs.go:274] 0 containers: []
	W0629 11:54:36.045922   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:36.045984   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:36.075280   39321 logs.go:274] 0 containers: []
	W0629 11:54:36.075293   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:36.075359   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:36.105709   39321 logs.go:274] 0 containers: []
	W0629 11:54:36.105720   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:36.105789   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:36.135433   39321 logs.go:274] 0 containers: []
	W0629 11:54:36.135445   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:36.135509   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:36.164044   39321 logs.go:274] 0 containers: []
	W0629 11:54:36.164057   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:36.164116   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:36.193256   39321 logs.go:274] 0 containers: []
	W0629 11:54:36.193269   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:36.193331   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:36.221611   39321 logs.go:274] 0 containers: []
	W0629 11:54:36.221623   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:36.221630   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:36.221636   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:36.261723   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:36.261740   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:36.273915   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:36.273934   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:36.332462   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:36.332479   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:36.332487   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:36.346115   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:36.346128   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:38.400565   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054363884s)
	I0629 11:54:40.901227   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:40.976044   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:41.005727   39321 logs.go:274] 0 containers: []
	W0629 11:54:41.005739   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:41.005796   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:41.036553   39321 logs.go:274] 0 containers: []
	W0629 11:54:41.045422   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:41.045478   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:41.075203   39321 logs.go:274] 0 containers: []
	W0629 11:54:41.075216   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:41.075276   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:41.108156   39321 logs.go:274] 0 containers: []
	W0629 11:54:41.108168   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:41.108227   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:41.137946   39321 logs.go:274] 0 containers: []
	W0629 11:54:41.137957   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:41.138020   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:41.167765   39321 logs.go:274] 0 containers: []
	W0629 11:54:41.167777   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:41.167846   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:41.197634   39321 logs.go:274] 0 containers: []
	W0629 11:54:41.197645   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:41.197700   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:41.226006   39321 logs.go:274] 0 containers: []
	W0629 11:54:41.226019   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:41.226025   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:41.226036   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:41.278933   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:41.278945   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:41.278952   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:41.292648   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:41.292661   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:43.349789   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057054339s)
	I0629 11:54:43.349901   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:43.349908   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:43.389415   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:43.389428   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:45.901944   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:45.976279   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:46.007239   39321 logs.go:274] 0 containers: []
	W0629 11:54:46.007251   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:46.007317   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:46.038729   39321 logs.go:274] 0 containers: []
	W0629 11:54:46.045289   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:46.045348   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:46.080579   39321 logs.go:274] 0 containers: []
	W0629 11:54:46.080656   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:46.080727   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:46.110618   39321 logs.go:274] 0 containers: []
	W0629 11:54:46.110630   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:46.110691   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:46.139982   39321 logs.go:274] 0 containers: []
	W0629 11:54:46.139994   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:46.140049   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:46.168606   39321 logs.go:274] 0 containers: []
	W0629 11:54:46.168620   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:46.168685   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:46.198162   39321 logs.go:274] 0 containers: []
	W0629 11:54:46.198175   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:46.198238   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:46.226969   39321 logs.go:274] 0 containers: []
	W0629 11:54:46.226980   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:46.226987   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:46.226995   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:48.280086   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053017479s)
	I0629 11:54:48.280198   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:48.280208   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:48.321498   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:48.321516   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:48.333730   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:48.333746   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:48.386942   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:48.386954   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:48.386963   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:50.902020   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:50.976006   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:51.016056   39321 logs.go:274] 0 containers: []
	W0629 11:54:51.016066   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:51.016114   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:51.048022   39321 logs.go:274] 0 containers: []
	W0629 11:54:51.048034   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:51.048093   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:51.081074   39321 logs.go:274] 0 containers: []
	W0629 11:54:51.081085   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:51.081143   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:51.112957   39321 logs.go:274] 0 containers: []
	W0629 11:54:51.112968   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:51.113030   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:51.145997   39321 logs.go:274] 0 containers: []
	W0629 11:54:51.146009   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:51.146068   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:51.176395   39321 logs.go:274] 0 containers: []
	W0629 11:54:51.176407   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:51.176469   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:51.208630   39321 logs.go:274] 0 containers: []
	W0629 11:54:51.208645   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:51.208708   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:51.239987   39321 logs.go:274] 0 containers: []
	W0629 11:54:51.240003   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:51.240012   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:51.240021   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:51.287920   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:51.287939   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:51.302964   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:51.302985   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:51.362169   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:51.362179   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:51.362186   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:51.376235   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:51.376248   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:53.427692   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051370993s)
	I0629 11:54:55.928476   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:55.976666   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:56.005708   39321 logs.go:274] 0 containers: []
	W0629 11:54:56.005720   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:56.005780   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:56.034443   39321 logs.go:274] 0 containers: []
	W0629 11:54:56.049359   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:56.049422   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:56.078685   39321 logs.go:274] 0 containers: []
	W0629 11:54:56.078697   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:56.078752   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:56.119131   39321 logs.go:274] 0 containers: []
	W0629 11:54:56.119143   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:56.119202   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:56.147731   39321 logs.go:274] 0 containers: []
	W0629 11:54:56.147743   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:56.147801   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:56.176982   39321 logs.go:274] 0 containers: []
	W0629 11:54:56.176994   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:56.177049   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:56.205600   39321 logs.go:274] 0 containers: []
	W0629 11:54:56.205613   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:56.205667   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:56.234552   39321 logs.go:274] 0 containers: []
	W0629 11:54:56.234564   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:56.234570   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:56.234576   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:56.275806   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:56.275822   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:56.288255   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:56.288270   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:56.343278   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:56.343289   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:56.343296   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:56.357151   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:56.357163   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:58.409308   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052071728s)
	I0629 11:55:00.909863   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:00.975039   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:01.009426   39321 logs.go:274] 0 containers: []
	W0629 11:55:01.009439   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:01.009500   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:01.058626   39321 logs.go:274] 0 containers: []
	W0629 11:55:01.058638   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:01.058715   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:01.096270   39321 logs.go:274] 0 containers: []
	W0629 11:55:01.096285   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:01.096370   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:01.130375   39321 logs.go:274] 0 containers: []
	W0629 11:55:01.130388   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:01.130446   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:01.167367   39321 logs.go:274] 0 containers: []
	W0629 11:55:01.167379   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:01.167443   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:01.200318   39321 logs.go:274] 0 containers: []
	W0629 11:55:01.200330   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:01.200390   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:01.231557   39321 logs.go:274] 0 containers: []
	W0629 11:55:01.231570   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:01.231629   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:01.266142   39321 logs.go:274] 0 containers: []
	W0629 11:55:01.266179   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:01.266211   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:01.266225   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:03.348388   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.082087684s)
	I0629 11:55:03.348526   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:03.348534   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:03.393758   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:03.393788   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:03.412557   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:03.412576   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:03.479793   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:03.479808   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:03.479818   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:05.995421   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:06.477124   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:06.508598   39321 logs.go:274] 0 containers: []
	W0629 11:55:06.508609   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:06.508668   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:06.571634   39321 logs.go:274] 0 containers: []
	W0629 11:55:06.571648   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:06.571709   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:06.603733   39321 logs.go:274] 0 containers: []
	W0629 11:55:06.603750   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:06.603821   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:06.641504   39321 logs.go:274] 0 containers: []
	W0629 11:55:06.641540   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:06.641612   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:06.680642   39321 logs.go:274] 0 containers: []
	W0629 11:55:06.680654   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:06.680718   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:06.719154   39321 logs.go:274] 0 containers: []
	W0629 11:55:06.719166   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:06.719243   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:06.752660   39321 logs.go:274] 0 containers: []
	W0629 11:55:06.752672   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:06.752781   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:06.790338   39321 logs.go:274] 0 containers: []
	W0629 11:55:06.790350   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:06.790357   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:06.790364   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:06.839137   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:06.839156   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:06.855958   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:06.855978   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:06.924265   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:06.924279   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:06.924285   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:06.947627   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:06.947646   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:09.012320   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.064598664s)
	I0629 11:55:11.512790   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:11.975458   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:12.007895   39321 logs.go:274] 0 containers: []
	W0629 11:55:12.007907   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:12.007963   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:12.039685   39321 logs.go:274] 0 containers: []
	W0629 11:55:12.039696   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:12.039751   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:12.068287   39321 logs.go:274] 0 containers: []
	W0629 11:55:12.068306   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:12.068380   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:12.097250   39321 logs.go:274] 0 containers: []
	W0629 11:55:12.097262   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:12.097329   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:12.125908   39321 logs.go:274] 0 containers: []
	W0629 11:55:12.125920   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:12.125974   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:12.155445   39321 logs.go:274] 0 containers: []
	W0629 11:55:12.155457   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:12.155513   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:12.185314   39321 logs.go:274] 0 containers: []
	W0629 11:55:12.185326   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:12.185383   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:12.214629   39321 logs.go:274] 0 containers: []
	W0629 11:55:12.214639   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:12.214646   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:12.214653   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:12.271182   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:12.271194   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:12.271204   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:12.286914   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:12.286928   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:14.343425   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056423824s)
	I0629 11:55:14.343535   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:14.343543   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:14.383870   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:14.383883   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:16.897690   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:16.976654   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:17.012584   39321 logs.go:274] 0 containers: []
	W0629 11:55:17.012596   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:17.012657   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:17.044046   39321 logs.go:274] 0 containers: []
	W0629 11:55:17.044058   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:17.044124   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:17.074296   39321 logs.go:274] 0 containers: []
	W0629 11:55:17.074308   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:17.074365   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:17.115757   39321 logs.go:274] 0 containers: []
	W0629 11:55:17.115768   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:17.115824   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:17.145895   39321 logs.go:274] 0 containers: []
	W0629 11:55:17.145906   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:17.145962   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:17.175767   39321 logs.go:274] 0 containers: []
	W0629 11:55:17.175777   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:17.175843   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:17.205469   39321 logs.go:274] 0 containers: []
	W0629 11:55:17.205480   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:17.205540   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:17.234651   39321 logs.go:274] 0 containers: []
	W0629 11:55:17.234663   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:17.234670   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:17.234677   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:17.277938   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:17.277952   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:17.289697   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:17.289715   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:17.341609   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:17.341618   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:17.341625   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:17.355655   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:17.355667   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:19.408285   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052537682s)
	I0629 11:55:21.910724   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:21.975500   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:22.004837   39321 logs.go:274] 0 containers: []
	W0629 11:55:22.004854   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:22.004921   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:22.035732   39321 logs.go:274] 0 containers: []
	W0629 11:55:22.035743   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:22.035801   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:22.069625   39321 logs.go:274] 0 containers: []
	W0629 11:55:22.069636   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:22.069692   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:22.099818   39321 logs.go:274] 0 containers: []
	W0629 11:55:22.099832   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:22.099880   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:22.130176   39321 logs.go:274] 0 containers: []
	W0629 11:55:22.130188   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:22.130247   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:22.162002   39321 logs.go:274] 0 containers: []
	W0629 11:55:22.162019   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:22.162078   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:22.190365   39321 logs.go:274] 0 containers: []
	W0629 11:55:22.190379   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:22.190442   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:22.219748   39321 logs.go:274] 0 containers: []
	W0629 11:55:22.219761   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:22.219767   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:22.219777   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:22.273321   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:22.273337   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:22.273352   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:22.287787   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:22.287800   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:24.342535   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054658523s)
	I0629 11:55:24.342644   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:24.342651   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:24.382581   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:24.382593   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:26.895697   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:26.977747   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:27.008926   39321 logs.go:274] 0 containers: []
	W0629 11:55:27.008938   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:27.009000   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:27.038100   39321 logs.go:274] 0 containers: []
	W0629 11:55:27.038111   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:27.038168   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:27.067169   39321 logs.go:274] 0 containers: []
	W0629 11:55:27.067180   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:27.067236   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:27.095625   39321 logs.go:274] 0 containers: []
	W0629 11:55:27.095637   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:27.095694   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:27.125107   39321 logs.go:274] 0 containers: []
	W0629 11:55:27.125118   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:27.125175   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:27.154968   39321 logs.go:274] 0 containers: []
	W0629 11:55:27.154982   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:27.155040   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:27.183779   39321 logs.go:274] 0 containers: []
	W0629 11:55:27.183791   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:27.183850   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:27.212801   39321 logs.go:274] 0 containers: []
	W0629 11:55:27.212813   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:27.212820   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:27.212827   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:27.253498   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:27.253514   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:27.265985   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:27.266001   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:27.322114   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:27.322123   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:27.322130   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:27.335806   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:27.335821   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:29.392403   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056508883s)
	I0629 11:55:31.893240   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:31.977413   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:32.008956   39321 logs.go:274] 0 containers: []
	W0629 11:55:32.008971   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:32.009028   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:32.038201   39321 logs.go:274] 0 containers: []
	W0629 11:55:32.038212   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:32.038267   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:32.066990   39321 logs.go:274] 0 containers: []
	W0629 11:55:32.067002   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:32.067057   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:32.097577   39321 logs.go:274] 0 containers: []
	W0629 11:55:32.097593   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:32.097667   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:32.127554   39321 logs.go:274] 0 containers: []
	W0629 11:55:32.127567   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:32.127629   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:32.156429   39321 logs.go:274] 0 containers: []
	W0629 11:55:32.156443   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:32.156507   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:32.185611   39321 logs.go:274] 0 containers: []
	W0629 11:55:32.185623   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:32.185681   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:32.214323   39321 logs.go:274] 0 containers: []
	W0629 11:55:32.214335   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:32.214342   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:32.214348   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:32.267585   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:32.267595   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:32.267601   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:32.282076   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:32.282088   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:34.339416   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057253442s)
	I0629 11:55:34.339525   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:34.339531   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:34.379921   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:34.379933   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:36.894519   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:36.975922   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:37.010242   39321 logs.go:274] 0 containers: []
	W0629 11:55:37.010263   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:37.010330   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:37.040881   39321 logs.go:274] 0 containers: []
	W0629 11:55:37.040893   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:37.040949   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:37.070230   39321 logs.go:274] 0 containers: []
	W0629 11:55:37.070242   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:37.070308   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:37.101292   39321 logs.go:274] 0 containers: []
	W0629 11:55:37.101303   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:37.101353   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:37.131101   39321 logs.go:274] 0 containers: []
	W0629 11:55:37.131113   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:37.131173   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:37.159540   39321 logs.go:274] 0 containers: []
	W0629 11:55:37.159552   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:37.159610   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:37.189520   39321 logs.go:274] 0 containers: []
	W0629 11:55:37.189532   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:37.189588   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:37.219222   39321 logs.go:274] 0 containers: []
	W0629 11:55:37.219233   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:37.219241   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:37.219248   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:37.259017   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:37.259032   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:37.270684   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:37.270696   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:37.322386   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:37.322399   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:37.322407   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:37.335982   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:37.335995   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:39.390442   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054372053s)
	I0629 11:55:41.891223   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:41.978245   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:42.009313   39321 logs.go:274] 0 containers: []
	W0629 11:55:42.009326   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:42.009380   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:42.039076   39321 logs.go:274] 0 containers: []
	W0629 11:55:42.039089   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:42.039146   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:42.068464   39321 logs.go:274] 0 containers: []
	W0629 11:55:42.068478   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:42.068534   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:42.097800   39321 logs.go:274] 0 containers: []
	W0629 11:55:42.097811   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:42.097866   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:42.127026   39321 logs.go:274] 0 containers: []
	W0629 11:55:42.127038   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:42.127093   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:42.156370   39321 logs.go:274] 0 containers: []
	W0629 11:55:42.156382   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:42.156444   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:42.186834   39321 logs.go:274] 0 containers: []
	W0629 11:55:42.186846   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:42.186901   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:42.215822   39321 logs.go:274] 0 containers: []
	W0629 11:55:42.215835   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:42.215846   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:42.215855   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:42.230305   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:42.230319   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:44.285629   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055236751s)
	I0629 11:55:44.285764   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:44.285771   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:44.325646   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:44.325660   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:44.337146   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:44.337159   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:44.389786   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:46.891554   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:46.978341   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:47.009917   39321 logs.go:274] 0 containers: []
	W0629 11:55:47.009929   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:47.009985   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:47.038523   39321 logs.go:274] 0 containers: []
	W0629 11:55:47.038534   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:47.038588   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:47.067903   39321 logs.go:274] 0 containers: []
	W0629 11:55:47.067915   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:47.067970   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:47.098087   39321 logs.go:274] 0 containers: []
	W0629 11:55:47.098099   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:47.098155   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:47.127152   39321 logs.go:274] 0 containers: []
	W0629 11:55:47.127164   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:47.127220   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:47.157028   39321 logs.go:274] 0 containers: []
	W0629 11:55:47.157039   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:47.157096   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:47.186471   39321 logs.go:274] 0 containers: []
	W0629 11:55:47.186483   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:47.186541   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:47.215975   39321 logs.go:274] 0 containers: []
	W0629 11:55:47.215988   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:47.215997   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:47.216004   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:47.256256   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:47.256268   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:47.268708   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:47.268721   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:47.320566   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:47.320577   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:47.320583   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:47.334197   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:47.334209   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:49.391366   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057082304s)
	I0629 11:55:51.893853   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:51.976453   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:52.006330   39321 logs.go:274] 0 containers: []
	W0629 11:55:52.006344   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:52.006418   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:52.036416   39321 logs.go:274] 0 containers: []
	W0629 11:55:52.036428   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:52.036489   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:52.065995   39321 logs.go:274] 0 containers: []
	W0629 11:55:52.066007   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:52.066062   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:52.095567   39321 logs.go:274] 0 containers: []
	W0629 11:55:52.095579   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:52.095639   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:52.125457   39321 logs.go:274] 0 containers: []
	W0629 11:55:52.125470   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:52.125526   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:52.154476   39321 logs.go:274] 0 containers: []
	W0629 11:55:52.154488   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:52.154545   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:52.183063   39321 logs.go:274] 0 containers: []
	W0629 11:55:52.183074   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:52.183133   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:52.212690   39321 logs.go:274] 0 containers: []
	W0629 11:55:52.212702   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:52.212708   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:52.212715   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:52.253322   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:52.253336   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:52.264898   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:52.264911   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:52.317711   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:52.317722   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:52.317729   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:52.331473   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:52.331486   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:54.387012   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055452409s)
	I0629 11:55:56.889424   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:56.978656   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:57.009805   39321 logs.go:274] 0 containers: []
	W0629 11:55:57.009819   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:57.009887   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:57.038560   39321 logs.go:274] 0 containers: []
	W0629 11:55:57.038572   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:57.038628   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:57.067167   39321 logs.go:274] 0 containers: []
	W0629 11:55:57.067179   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:57.067242   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:57.095884   39321 logs.go:274] 0 containers: []
	W0629 11:55:57.095896   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:57.095954   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:57.125648   39321 logs.go:274] 0 containers: []
	W0629 11:55:57.125660   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:57.125717   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:57.157517   39321 logs.go:274] 0 containers: []
	W0629 11:55:57.157531   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:57.157587   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:57.190283   39321 logs.go:274] 0 containers: []
	W0629 11:55:57.190296   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:57.190357   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:57.221529   39321 logs.go:274] 0 containers: []
	W0629 11:55:57.221543   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:57.221550   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:57.221559   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:57.283015   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:57.283028   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:57.283037   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:57.298819   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:57.298833   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:59.359979   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.061069192s)
	I0629 11:55:59.360122   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:59.360130   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:59.403714   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:59.403731   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:56:01.924292   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:56:01.977361   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:56:02.008503   39321 logs.go:274] 0 containers: []
	W0629 11:56:02.008514   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:56:02.008579   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:56:02.037695   39321 logs.go:274] 0 containers: []
	W0629 11:56:02.037707   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:56:02.037764   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:56:02.068233   39321 logs.go:274] 0 containers: []
	W0629 11:56:02.068246   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:56:02.068304   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:56:02.101131   39321 logs.go:274] 0 containers: []
	W0629 11:56:02.101145   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:56:02.101192   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:56:02.140669   39321 logs.go:274] 0 containers: []
	W0629 11:56:02.140686   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:56:02.140754   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:56:02.172586   39321 logs.go:274] 0 containers: []
	W0629 11:56:02.172597   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:56:02.172657   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:56:02.202584   39321 logs.go:274] 0 containers: []
	W0629 11:56:02.202595   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:56:02.202649   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:56:02.231310   39321 logs.go:274] 0 containers: []
	W0629 11:56:02.231322   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:56:02.231328   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:56:02.231335   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:56:02.245830   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:56:02.245842   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:56:04.305978   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060063938s)
	I0629 11:56:04.306087   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:56:04.306093   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:56:04.350848   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:56:04.350865   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:56:04.364754   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:56:04.364769   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:56:04.421057   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:56:06.921498   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:56:06.976946   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:56:07.006617   39321 logs.go:274] 0 containers: []
	W0629 11:56:07.006634   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:56:07.006697   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:56:07.035332   39321 logs.go:274] 0 containers: []
	W0629 11:56:07.035343   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:56:07.035402   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:56:07.066222   39321 logs.go:274] 0 containers: []
	W0629 11:56:07.066234   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:56:07.066288   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:56:07.096497   39321 logs.go:274] 0 containers: []
	W0629 11:56:07.096507   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:56:07.096567   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:56:07.128718   39321 logs.go:274] 0 containers: []
	W0629 11:56:07.128730   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:56:07.128788   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:56:07.158497   39321 logs.go:274] 0 containers: []
	W0629 11:56:07.158509   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:56:07.158566   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:56:07.188199   39321 logs.go:274] 0 containers: []
	W0629 11:56:07.188212   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:56:07.188276   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:56:07.221837   39321 logs.go:274] 0 containers: []
	W0629 11:56:07.221850   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:56:07.221857   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:56:07.221865   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:56:07.280426   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:56:07.280437   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:56:07.280444   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:56:07.294159   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:56:07.294171   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:56:09.349657   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055411582s)
	I0629 11:56:09.349765   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:56:09.349773   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:56:09.390388   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:56:09.390418   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:56:11.904018   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:56:11.977485   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:56:12.008870   39321 logs.go:274] 0 containers: []
	W0629 11:56:12.008885   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:56:12.008943   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:56:12.039518   39321 logs.go:274] 0 containers: []
	W0629 11:56:12.039529   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:56:12.039584   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:56:12.069785   39321 logs.go:274] 0 containers: []
	W0629 11:56:12.069797   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:56:12.069854   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:56:12.098535   39321 logs.go:274] 0 containers: []
	W0629 11:56:12.098547   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:56:12.098601   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:56:12.129838   39321 logs.go:274] 0 containers: []
	W0629 11:56:12.129850   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:56:12.129904   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:56:12.160692   39321 logs.go:274] 0 containers: []
	W0629 11:56:12.160703   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:56:12.160764   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:56:12.189825   39321 logs.go:274] 0 containers: []
	W0629 11:56:12.189838   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:56:12.189895   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:56:12.220410   39321 logs.go:274] 0 containers: []
	W0629 11:56:12.220423   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:56:12.220431   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:56:12.220437   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:56:12.235214   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:56:12.235228   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:56:14.302756   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.067455496s)
	I0629 11:56:14.302869   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:56:14.302876   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:56:14.354929   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:56:14.354952   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:56:14.371004   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:56:14.371029   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:56:14.453331   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:56:16.954699   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:56:16.977181   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:56:17.029799   39321 logs.go:274] 0 containers: []
	W0629 11:56:17.029815   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:56:17.029890   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:56:17.067128   39321 logs.go:274] 0 containers: []
	W0629 11:56:17.067140   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:56:17.067206   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:56:17.101085   39321 logs.go:274] 0 containers: []
	W0629 11:56:17.101100   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:56:17.101157   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:56:17.141275   39321 logs.go:274] 0 containers: []
	W0629 11:56:17.141287   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:56:17.141349   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:56:17.176008   39321 logs.go:274] 0 containers: []
	W0629 11:56:17.176020   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:56:17.176082   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:56:17.218745   39321 logs.go:274] 0 containers: []
	W0629 11:56:17.218758   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:56:17.218817   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:56:17.251487   39321 logs.go:274] 0 containers: []
	W0629 11:56:17.251499   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:56:17.251558   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:56:17.283979   39321 logs.go:274] 0 containers: []
	W0629 11:56:17.283991   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:56:17.283999   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:56:17.284007   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:56:17.299440   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:56:17.299455   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:56:17.366240   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:56:17.366253   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:56:17.366260   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:56:17.382085   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:56:17.382101   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:56:19.451281   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.069105252s)
	I0629 11:56:19.451436   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:56:19.451445   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:56:22.002255   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:56:22.477653   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:56:22.509911   39321 logs.go:274] 0 containers: []
	W0629 11:56:22.509924   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:56:22.509997   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:56:22.542700   39321 logs.go:274] 0 containers: []
	W0629 11:56:22.542716   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:56:22.542772   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:56:22.575951   39321 logs.go:274] 0 containers: []
	W0629 11:56:22.575966   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:56:22.576029   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:56:22.607676   39321 logs.go:274] 0 containers: []
	W0629 11:56:22.607687   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:56:22.607743   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:56:22.636109   39321 logs.go:274] 0 containers: []
	W0629 11:56:22.636121   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:56:22.636193   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:56:22.669494   39321 logs.go:274] 0 containers: []
	W0629 11:56:22.669507   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:56:22.669563   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:56:22.701146   39321 logs.go:274] 0 containers: []
	W0629 11:56:22.701158   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:56:22.701216   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:56:22.729354   39321 logs.go:274] 0 containers: []
	W0629 11:56:22.729366   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:56:22.729372   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:56:22.729379   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:56:22.744459   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:56:22.744472   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:56:24.799772   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055226839s)
	I0629 11:56:24.799879   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:56:24.799886   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:56:24.844410   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:56:24.844432   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:56:24.857438   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:56:24.857451   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:56:24.924718   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
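Each failed cycle above follows the same pattern: probe for a running kube-apiserver, gather diagnostics, wait a couple of seconds, retry until a deadline. A minimal sketch of that poll-until-healthy pattern (function and parameter names are hypothetical, not minikube's actual implementation):

```python
import time

def wait_for(check, timeout=30.0, interval=2.5,
             clock=time.monotonic, sleep=time.sleep):
    """Poll check() every `interval` seconds until it returns True
    or `timeout` elapses. Returns True on success, False on deadline,
    mirroring how the log re-runs `pgrep -xnf kube-apiserver...`
    every few seconds."""
    deadline = clock() + timeout
    while clock() < deadline:
        if check():
            return True
        sleep(interval)
    return False

# Example: a probe that only succeeds on its third call.
calls = {"n": 0}
def fake_probe():
    calls["n"] += 1
    return calls["n"] >= 3

ok = wait_for(fake_probe, timeout=5.0, interval=0.01)
```

Injecting `clock` and `sleep` keeps the loop testable without real delays; the production equivalent would run the probe over SSH, as the `ssh_runner` lines above do.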
	I0629 11:56:27.425771   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:56:27.477975   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:56:27.516413   39321 logs.go:274] 0 containers: []
	W0629 11:56:27.516425   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:56:27.516488   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:56:27.547920   39321 logs.go:274] 0 containers: []
	W0629 11:56:27.547935   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:56:27.548001   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:56:27.579810   39321 logs.go:274] 0 containers: []
	W0629 11:56:27.579823   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:56:27.579880   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:56:27.609004   39321 logs.go:274] 0 containers: []
	W0629 11:56:27.609017   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:56:27.609076   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:56:27.642619   39321 logs.go:274] 0 containers: []
	W0629 11:56:27.642631   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:56:27.642696   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:56:27.675523   39321 logs.go:274] 0 containers: []
	W0629 11:56:27.675537   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:56:27.675604   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:56:27.708569   39321 logs.go:274] 0 containers: []
	W0629 11:56:27.708583   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:56:27.708644   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:56:27.742965   39321 logs.go:274] 0 containers: []
	W0629 11:56:27.742979   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:56:27.742987   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:56:27.742995   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:56:27.786616   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:56:27.786630   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:56:27.799726   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:56:27.799740   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:56:27.858467   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:56:27.858478   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:56:27.858486   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:56:27.874183   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:56:27.874197   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:56:29.927744   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053473931s)
	I0629 11:56:32.428610   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:56:32.477791   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:56:32.508592   39321 logs.go:274] 0 containers: []
	W0629 11:56:32.508604   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:56:32.508660   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:56:32.535533   39321 logs.go:274] 0 containers: []
	W0629 11:56:32.535545   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:56:32.535601   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:56:32.563345   39321 logs.go:274] 0 containers: []
	W0629 11:56:32.563356   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:56:32.563412   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:56:32.591552   39321 logs.go:274] 0 containers: []
	W0629 11:56:32.591564   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:56:32.591620   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:56:32.619133   39321 logs.go:274] 0 containers: []
	W0629 11:56:32.619148   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:56:32.619212   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:56:32.646987   39321 logs.go:274] 0 containers: []
	W0629 11:56:32.646998   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:56:32.647059   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:56:32.676622   39321 logs.go:274] 0 containers: []
	W0629 11:56:32.676634   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:56:32.676692   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:56:32.705736   39321 logs.go:274] 0 containers: []
	W0629 11:56:32.705752   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:56:32.705763   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:56:32.705772   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:56:32.748977   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:56:32.748996   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:56:32.761231   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:56:32.761246   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:56:32.813606   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:56:32.813622   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:56:32.813629   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:56:32.827451   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:56:32.827464   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:56:34.882306   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054762143s)
	I0629 11:56:37.382707   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:56:37.478631   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:56:37.509192   39321 logs.go:274] 0 containers: []
	W0629 11:56:37.509205   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:56:37.509266   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:56:37.539379   39321 logs.go:274] 0 containers: []
	W0629 11:56:37.539392   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:56:37.539451   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:56:37.572035   39321 logs.go:274] 0 containers: []
	W0629 11:56:37.572050   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:56:37.572113   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:56:37.604350   39321 logs.go:274] 0 containers: []
	W0629 11:56:37.604362   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:56:37.604424   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:56:37.634931   39321 logs.go:274] 0 containers: []
	W0629 11:56:37.634945   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:56:37.635005   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:56:37.672248   39321 logs.go:274] 0 containers: []
	W0629 11:56:37.672260   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:56:37.672322   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:56:37.705747   39321 logs.go:274] 0 containers: []
	W0629 11:56:37.705761   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:56:37.705822   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:56:37.736079   39321 logs.go:274] 0 containers: []
	W0629 11:56:37.736091   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:56:37.736098   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:56:37.736105   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:56:39.788879   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052700971s)
	I0629 11:56:39.788985   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:56:39.788992   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:56:39.831749   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:56:39.831765   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:56:39.844109   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:56:39.844125   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:56:39.897427   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:56:39.897438   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:56:39.897446   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:56:42.411569   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:56:42.478201   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:56:42.508036   39321 logs.go:274] 0 containers: []
	W0629 11:56:42.508050   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:56:42.508104   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:56:42.537322   39321 logs.go:274] 0 containers: []
	W0629 11:56:42.537342   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:56:42.537408   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:56:42.567154   39321 logs.go:274] 0 containers: []
	W0629 11:56:42.567174   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:56:42.567265   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:56:42.598195   39321 logs.go:274] 0 containers: []
	W0629 11:56:42.598208   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:56:42.598268   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:56:42.626756   39321 logs.go:274] 0 containers: []
	W0629 11:56:42.626768   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:56:42.626828   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:56:42.655173   39321 logs.go:274] 0 containers: []
	W0629 11:56:42.655186   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:56:42.655250   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:56:42.685537   39321 logs.go:274] 0 containers: []
	W0629 11:56:42.685550   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:56:42.685614   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:56:42.714668   39321 logs.go:274] 0 containers: []
	W0629 11:56:42.714681   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:56:42.714688   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:56:42.714696   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:56:42.755751   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:56:42.755764   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:56:42.769347   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:56:42.769360   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:56:42.829516   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:56:42.829528   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:56:42.829536   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:56:42.843298   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:56:42.843311   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:56:44.901408   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05802359s)
	I0629 11:56:47.402292   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:56:47.478130   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:56:47.508967   39321 logs.go:274] 0 containers: []
	W0629 11:56:47.508979   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:56:47.509036   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:56:47.540342   39321 logs.go:274] 0 containers: []
	W0629 11:56:47.540356   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:56:47.540418   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:56:47.573963   39321 logs.go:274] 0 containers: []
	W0629 11:56:47.573977   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:56:47.574051   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:56:47.605181   39321 logs.go:274] 0 containers: []
	W0629 11:56:47.605195   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:56:47.605257   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:56:47.634638   39321 logs.go:274] 0 containers: []
	W0629 11:56:47.634651   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:56:47.634707   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:56:47.663911   39321 logs.go:274] 0 containers: []
	W0629 11:56:47.663922   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:56:47.663979   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:56:47.695046   39321 logs.go:274] 0 containers: []
	W0629 11:56:47.695059   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:56:47.695115   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:56:47.725198   39321 logs.go:274] 0 containers: []
	W0629 11:56:47.725210   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:56:47.725217   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:56:47.725223   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:56:47.765798   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:56:47.765812   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:56:47.777622   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:56:47.777635   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:56:47.828560   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:56:47.828570   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:56:47.828577   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:56:47.842216   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:56:47.842229   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:56:49.896879   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054577218s)
	I0629 11:56:52.398022   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:56:52.478442   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:56:52.509051   39321 logs.go:274] 0 containers: []
	W0629 11:56:52.509062   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:56:52.509122   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:56:52.537285   39321 logs.go:274] 0 containers: []
	W0629 11:56:52.537297   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:56:52.537359   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:56:52.567965   39321 logs.go:274] 0 containers: []
	W0629 11:56:52.567977   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:56:52.568041   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:56:52.598001   39321 logs.go:274] 0 containers: []
	W0629 11:56:52.598013   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:56:52.598071   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:56:52.625772   39321 logs.go:274] 0 containers: []
	W0629 11:56:52.625784   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:56:52.625841   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:56:52.653773   39321 logs.go:274] 0 containers: []
	W0629 11:56:52.653786   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:56:52.653846   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:56:52.682357   39321 logs.go:274] 0 containers: []
	W0629 11:56:52.682370   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:56:52.682426   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:56:52.711233   39321 logs.go:274] 0 containers: []
	W0629 11:56:52.711247   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:56:52.711256   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:56:52.711266   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:56:54.765152   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053808741s)
	I0629 11:56:54.765261   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:56:54.765268   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:56:54.806654   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:56:54.806675   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:56:54.823000   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:56:54.823014   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:56:54.881179   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:56:54.881191   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:56:54.881199   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:56:57.399080   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:56:57.478402   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:56:57.519439   39321 logs.go:274] 0 containers: []
	W0629 11:56:57.519452   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:56:57.519510   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:56:57.550387   39321 logs.go:274] 0 containers: []
	W0629 11:56:57.550399   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:56:57.550456   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:56:57.580186   39321 logs.go:274] 0 containers: []
	W0629 11:56:57.580198   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:56:57.580255   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:56:57.609650   39321 logs.go:274] 0 containers: []
	W0629 11:56:57.609664   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:56:57.609729   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:56:57.639061   39321 logs.go:274] 0 containers: []
	W0629 11:56:57.639074   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:56:57.639133   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:56:57.668632   39321 logs.go:274] 0 containers: []
	W0629 11:56:57.668644   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:56:57.668702   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:56:57.698014   39321 logs.go:274] 0 containers: []
	W0629 11:56:57.698026   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:56:57.698085   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:56:57.727684   39321 logs.go:274] 0 containers: []
	W0629 11:56:57.727695   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:56:57.727702   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:56:57.727712   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:56:57.739649   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:56:57.739662   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:56:57.794506   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:56:57.794523   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:56:57.794530   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:56:57.808383   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:56:57.808396   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:56:59.867250   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058780497s)
	I0629 11:56:59.867356   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:56:59.867363   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:57:02.410667   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:57:02.479075   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:57:02.513534   39321 logs.go:274] 0 containers: []
	W0629 11:57:02.513548   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:57:02.513617   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:57:02.547824   39321 logs.go:274] 0 containers: []
	W0629 11:57:02.547837   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:57:02.547893   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:57:02.584011   39321 logs.go:274] 0 containers: []
	W0629 11:57:02.584024   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:57:02.584087   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:57:02.614083   39321 logs.go:274] 0 containers: []
	W0629 11:57:02.614095   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:57:02.614152   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:57:02.643350   39321 logs.go:274] 0 containers: []
	W0629 11:57:02.643361   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:57:02.643420   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:57:02.676992   39321 logs.go:274] 0 containers: []
	W0629 11:57:02.677004   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:57:02.677067   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:57:02.707295   39321 logs.go:274] 0 containers: []
	W0629 11:57:02.707311   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:57:02.707371   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:57:02.739944   39321 logs.go:274] 0 containers: []
	W0629 11:57:02.739956   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:57:02.739963   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:57:02.739969   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:57:02.780420   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:57:02.780437   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:57:02.792345   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:57:02.792358   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:57:02.845029   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:57:02.845042   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:57:02.845049   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:57:02.859531   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:57:02.859543   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:57:04.913035   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053416638s)
	I0629 11:57:07.413448   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:57:07.479212   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:57:07.515894   39321 logs.go:274] 0 containers: []
	W0629 11:57:07.515908   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:57:07.515981   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:57:07.547790   39321 logs.go:274] 0 containers: []
	W0629 11:57:07.547804   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:57:07.547873   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:57:07.577978   39321 logs.go:274] 0 containers: []
	W0629 11:57:07.577990   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:57:07.578048   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:57:07.606645   39321 logs.go:274] 0 containers: []
	W0629 11:57:07.606657   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:57:07.606717   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:57:07.636428   39321 logs.go:274] 0 containers: []
	W0629 11:57:07.636442   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:57:07.636501   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:57:07.664826   39321 logs.go:274] 0 containers: []
	W0629 11:57:07.664838   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:57:07.664895   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:57:07.692967   39321 logs.go:274] 0 containers: []
	W0629 11:57:07.692979   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:57:07.693035   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:57:07.722287   39321 logs.go:274] 0 containers: []
	W0629 11:57:07.722299   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:57:07.722305   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:57:07.722311   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:57:07.761762   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:57:07.761775   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:57:07.774034   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:57:07.774049   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:57:07.827402   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:57:07.827414   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:57:07.827421   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:57:07.841132   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:57:07.841147   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:57:09.896074   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054851918s)
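The "container status" command minikube runs above uses a runtime-agnostic fallback: if `crictl` is on the PATH, `which` resolves it and the first branch runs; otherwise the backtick expansion leaves the bare word `crictl`, that invocation fails, and `|| sudo docker ps -a` takes over. A minimal sketch of that fallback pattern (the `resolve` helper name is hypothetical, not from minikube):

```shell
#!/bin/sh
# Sketch of the `which tool || echo tool` fallback seen in the log's
# container-status command. `which` prints the resolved path and exits 0
# when the tool exists; when it does not, `echo` supplies the bare name so
# a later `|| <alternative command>` branch can take over.
resolve() {
  which "$1" 2>/dev/null || echo "$1"
}

resolve sh                    # prints a real path when sh is installed
resolve crictl-not-installed  # falls back to the bare name
```

In the log itself this appears inline as ``sudo `which crictl || echo crictl` ps -a || sudo docker ps -a``, so a missing `crictl` degrades to plain `docker ps -a` rather than aborting log collection.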
	I0629 11:57:12.396500   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:57:12.406510   39321 kubeadm.go:630] restartCluster took 4m6.122915265s
	W0629 11:57:12.406594   39321 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0629 11:57:12.406611   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0629 11:57:12.831183   39321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 11:57:12.845047   39321 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 11:57:12.857364   39321 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 11:57:12.857423   39321 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 11:57:12.870843   39321 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 11:57:12.870871   39321 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 11:57:13.813987   39321 out.go:204]   - Generating certificates and keys ...
	I0629 11:57:14.349791   39321 out.go:204]   - Booting up control plane ...
	W0629 11:59:09.269281   39321 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
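The repeated `[kubelet-check]` lines above are the signature of this failure mode: kubeadm polls the kubelet's healthz endpoint and never gets an answer. When triaging a saved copy of this log, counting those probe failures is a quick way to confirm the kubelet never came up. A small sketch (the helper name and log filename are hypothetical):

```shell
#!/bin/sh
# Count the repeated kubelet health-check failures in a saved kubeadm log,
# i.e. lines like:
#   [kubelet-check] It seems like the kubelet isn't running or healthy.
# A steadily repeating count with no successful check afterwards matches
# the "apiserver process never appeared" outcome seen in this run.
count_kubelet_failures() {
  grep -c "kubelet isn't running or healthy" "$1"
}
```

Usage: `count_kubelet_failures kubeadm-init.log`. From there, the log's own suggestions (`systemctl status kubelet`, `journalctl -xeu kubelet`) are the next step.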
	
	I0629 11:59:09.269312   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0629 11:59:09.691823   39321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 11:59:09.701755   39321 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 11:59:09.701805   39321 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 11:59:09.709759   39321 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 11:59:09.709777   39321 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 11:59:10.453324   39321 out.go:204]   - Generating certificates and keys ...
	I0629 11:59:11.075112   39321 out.go:204]   - Booting up control plane ...
	I0629 12:01:06.018998   39321 kubeadm.go:397] StartCluster complete in 7m59.760603139s
	I0629 12:01:06.019078   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 12:01:06.047361   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.083489   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 12:01:06.083580   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 12:01:06.118045   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.118058   39321 logs.go:276] No container was found matching "etcd"
	I0629 12:01:06.118119   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 12:01:06.148512   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.148524   39321 logs.go:276] No container was found matching "coredns"
	I0629 12:01:06.148587   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 12:01:06.177707   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.177719   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 12:01:06.177776   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 12:01:06.210822   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.210835   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 12:01:06.210895   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 12:01:06.243800   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.243812   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 12:01:06.243868   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 12:01:06.274291   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.274305   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 12:01:06.274368   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 12:01:06.308104   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.308119   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 12:01:06.308126   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 12:01:06.308133   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 12:01:06.347949   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 12:01:06.347968   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 12:01:06.361249   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 12:01:06.361264   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 12:01:06.413780   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 12:01:06.413793   39321 logs.go:123] Gathering logs for Docker ...
	I0629 12:01:06.413800   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 12:01:06.427622   39321 logs.go:123] Gathering logs for container status ...
	I0629 12:01:06.427633   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 12:01:08.487011   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.059302402s)
	W0629 12:01:08.487125   39321 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0629 12:01:08.487150   39321 out.go:239] * 
	W0629 12:01:08.487259   39321 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0629 12:01:08.487274   39321 out.go:239] * 
	W0629 12:01:08.487946   39321 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0629 12:01:08.550616   39321 out.go:177] 
	W0629 12:01:08.592802   39321 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0629 12:01:08.592939   39321 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0629 12:01:08.593004   39321 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0629 12:01:08.634371   39321 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-20220629114717-24356 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220629114717-24356
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220629114717-24356:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2",
	        "Created": "2022-06-29T18:47:24.686705454Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246394,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T18:53:02.298159951Z",
	            "FinishedAt": "2022-06-29T18:52:59.492186161Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/hosts",
	        "LogPath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2-json.log",
	        "Name": "/old-k8s-version-20220629114717-24356",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220629114717-24356:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220629114717-24356",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132-init/diff:/var/lib/docker/overlay2/fffebe0fdfada5807aeb835ff23043496ab70477725ee4f168b630301ac03e45/diff:/var/lib/docker/overlay2/d4eb6d2f34aa8e5c143d900dccdec5da9e3d130567442e6745d4efac5202fe49/diff:/var/lib/docker/overlay2/eb35fadba12ed9c48500d69b77e98e7dd72e90d3de5197d58b370df5b5dca4c7/diff:/var/lib/docker/overlay2/7b63894f671ef1edaa7c3b80a2acbde52dcdb21970e320799b6884e79553ea3e/diff:/var/lib/docker/overlay2/3740b6bc6ff226137eb09a6350d4395dc04bd9012c6c66125dc2ea6b663082cd/diff:/var/lib/docker/overlay2/a2fda66ed4937725e85838baed61cac418abe2ba55b4e664bf944246efcdd371/diff:/var/lib/docker/overlay2/574408913c5c73ee699b85768bbb4c0ce70e697bf6eb623e32017c62e8413acd/diff:/var/lib/docker/overlay2/1cde03c3877bfb18ad0533f814863e3030abec268ff30faceab8815ea7e2daf2/diff:/var/lib/docker/overlay2/52bf889e64b2ea0160f303622d5febb9c52b864e5a6dc2bfa5db90933ccaaa29/diff:/var/lib/docker/overlay2/b131e6
ae4a7a7f5705d087e4001676276e4daa26d6acfc99799bb4992e322410/diff:/var/lib/docker/overlay2/3f5c774f6f46936a974bfc6530b012fda75a59b22450e3342486fe400ab4b531/diff:/var/lib/docker/overlay2/8462528084f0c44a79e421427e0e4bc9ddd7642428c47ff1899d41b265223245/diff:/var/lib/docker/overlay2/cb9765866d13ba37669ec242ea0a1af87c92c7291c716e52037a2ccadc64ac82/diff:/var/lib/docker/overlay2/f0d06e6fa53f3ca9622f1efcfac6fe3fd18d2e5b9e07be3d624b0b9987073e55/diff:/var/lib/docker/overlay2/4ebd12d8b25cff2d3d8a989c047b696088121f0964cc7f94c6d0178ef16e3e1f/diff:/var/lib/docker/overlay2/40e16f5720fd3a8c1c8792aea0ec143af819f19cad845dde40b57ed7e372ab73/diff:/var/lib/docker/overlay2/3ce5ee64ba683c997a13b7ffa65978b4c9652772729737facd794209d49251c3/diff:/var/lib/docker/overlay2/c55c549a78d490ea576942661ba65103ea2992693548217973bb8fa1a5948b74/diff:/var/lib/docker/overlay2/4651b16dbc2e22b8a43dc1154546514f2076168d12f9c108f85fe7c6e60325f0/diff:/var/lib/docker/overlay2/9576343ea03501b15b520a83ffdc675c6d9ecd501f6ffcf6564dd75aa4f2812a/diff:/var/lib/d
ocker/overlay2/635ba7d01f96fd1ec1acabf157f4e5c00cbf80adf65b7f8873e444745fef2c9b/diff:/var/lib/docker/overlay2/6bbe0ce6ca00a7eb5bd7c22def5fcab4ebecab4a0b4cbc5ed236429671a41b6c/diff:/var/lib/docker/overlay2/b335551ba0fcfd6bff6ef5627289041f3083dc338e67b4f4728d4937bb6fb33a/diff:/var/lib/docker/overlay2/58cd90f6ad9016f3c4befb63eac504c9d2f0fc66251c5c9e3348080785d3cec4/diff:/var/lib/docker/overlay2/b7d943a8463e032d405d531846436b89574f10efeea6e4f2df92e3bb0e169d8e/diff:/var/lib/docker/overlay2/e633899f71c18e322af1b75837392bc89fd4275534b5bc70037965b0b80a770d/diff:/var/lib/docker/overlay2/651aabda39b5851bd186e23bc84f1029d819ed8eb032b13ac12f50f3d1486bfb/diff:/var/lib/docker/overlay2/3b137e27694d242a419b3fd2f8605837edfe77dae9462c63c3d7b41538e82591/diff:/var/lib/docker/overlay2/e9d4369b871c47acb146b73f8cbe14b89b0f74027df9117a7dc73f5dee8fee1c/diff:/var/lib/docker/overlay2/9379269362a969b07cc7d7f9faff9fa3b745529df38758733014a5dbe2470775/diff:/var/lib/docker/overlay2/9231c154723fa536d9894f703ec0388448e8611d5a01d54bca3a5b0a0b1
7ffd2/diff:/var/lib/docker/overlay2/9610e37ded5c6da7bd2c8edc56c3ae864637bb354f8ea3d6d1ccee6bd5c2aa7f/diff:/var/lib/docker/overlay2/025ecca5e756b1b8177204df7b2f2567a76dda456b2f1a8e312efd63150a8943/diff:/var/lib/docker/overlay2/7e69089e438e096c36ea0a4a37280fd036841e3287e57635e3407eb58fc0b6da/diff:/var/lib/docker/overlay2/c6d9ef67ed33e64c8ac8c4cdc7c33eb68f5266987969676165cabc2cf2fd346b/diff:/var/lib/docker/overlay2/394627c68237f7993b91eb0c377001630bb2e709dd58f65d899d44a3586dae91/diff:/var/lib/docker/overlay2/0c0c3c94789fc85cd70d9ee2b56d67ce6471d4dced47f21f15152d4edb6bc3e5/diff:/var/lib/docker/overlay2/849809e48c9bcbfe092aa063fcd274f284eeacde89acbb602b439d4cf0aef9b6/diff:/var/lib/docker/overlay2/49c27f0a55f204b161aa2da33ba8004f46cb93bf673975ad1b6286ce659db632/diff:/var/lib/docker/overlay2/a712a8f5cdb2f3840c706296240407405826d2936df034393c1ddf3cf2480b5f/diff:/var/lib/docker/overlay2/47949bfd134ff7a50def5e9b3af3424faf216354d1f157552f3c63c67c2728ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220629114717-24356",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220629114717-24356/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220629114717-24356",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220629114717-24356",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220629114717-24356",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f01a004add6a38bbd2eeef63591d683ecdc0a86e7e09d3f450b9f36251384a44",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60321"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60322"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60323"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60324"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60325"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f01a004add6a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220629114717-24356": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b1f5e01895cc",
	                        "old-k8s-version-20220629114717-24356"
	                    ],
	                    "NetworkID": "7e2ec4ec0dd8da4d477d55acc03296107258203e7a7a266adf169e3b0ee9c64c",
	                    "EndpointID": "5c3ab2122cf8bbb30617dcaafec5da849a4b6aecffda698851a0bf59a65b2b47",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356: exit status 2 (452.416053ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220629114717-24356 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220629114717-24356 logs -n 25: (3.58474537s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:46 PDT | 29 Jun 22 11:47 PDT |
	|         | kubenet-20220629112950-24356                      |          |         |         |                     |                     |
	|         | --memory=2048                                     |          |         |         |                     |                     |
	|         | --alsologtostderr                                 |          |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |          |         |         |                     |                     |
	|         | --network-plugin=kubenet                          |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:46 PDT | 29 Jun 22 11:46 PDT |
	|         | enable-default-cni-20220629112950-24356           |          |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:47 PDT | 29 Jun 22 11:47 PDT |
	|         | enable-default-cni-20220629112950-24356           |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:47 PDT | 29 Jun 22 11:47 PDT |
	|         | kubenet-20220629112950-24356                      |          |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:47 PDT |                     |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |          |         |         |                     |                     |
	|         | --disable-driver-mounts                           |          |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |          |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:48 PDT | 29 Jun 22 11:48 PDT |
	|         | kubenet-20220629112950-24356                      |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:48 PDT | 29 Jun 22 11:49 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --preload=false                       |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 29 Jun 22 11:49 PDT | 29 Jun 22 11:49 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:49 PDT | 29 Jun 22 11:49 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 29 Jun 22 11:49 PDT | 29 Jun 22 11:49 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:49 PDT | 29 Jun 22 11:54 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --preload=false                       |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 29 Jun 22 11:51 PDT |                     |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:52 PDT | 29 Jun 22 11:53 PDT |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 29 Jun 22 11:53 PDT | 29 Jun 22 11:53 PDT |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:53 PDT |                     |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |          |         |         |                     |                     |
	|         | --disable-driver-mounts                           |          |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |          |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:55 PDT | 29 Jun 22 11:55 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | sudo crictl images -o json                        |          |         |         |                     |                     |
	| pause   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:55 PDT | 29 Jun 22 11:55 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	| unpause | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:55 PDT | 29 Jun 22 11:55 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:56 PDT | 29 Jun 22 11:56 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:56 PDT | 29 Jun 22 11:56 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:56 PDT | 29 Jun 22 11:56 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT | 29 Jun 22 11:57 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT | 29 Jun 22 11:57 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT | 29 Jun 22 11:57 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT |                     |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/29 11:57:26
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 11:57:26.028245   39984 out.go:296] Setting OutFile to fd 1 ...
	I0629 11:57:26.028421   39984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:57:26.028426   39984 out.go:309] Setting ErrFile to fd 2...
	I0629 11:57:26.028430   39984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:57:26.028744   39984 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 11:57:26.029007   39984 out.go:303] Setting JSON to false
	I0629 11:57:26.044844   39984 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":10614,"bootTime":1656518432,"procs":387,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0629 11:57:26.044930   39984 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 11:57:26.071215   39984 out.go:177] * [embed-certs-20220629115611-24356] minikube v1.26.0 on Darwin 12.4
	I0629 11:57:26.114439   39984 notify.go:193] Checking for updates...
	I0629 11:57:26.136279   39984 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 11:57:26.158396   39984 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 11:57:26.180197   39984 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0629 11:57:26.201576   39984 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 11:57:26.223504   39984 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 11:57:26.245798   39984 config.go:178] Loaded profile config "embed-certs-20220629115611-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 11:57:26.246444   39984 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 11:57:26.316909   39984 docker.go:137] docker version: linux-20.10.16
	I0629 11:57:26.317080   39984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:57:26.446690   39984 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 18:57:26.381567768 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:57:26.468611   39984 out.go:177] * Using the docker driver based on existing profile
	I0629 11:57:26.489667   39984 start.go:284] selected driver: docker
	I0629 11:57:26.489698   39984 start.go:808] validating driver "docker" against &{Name:embed-certs-20220629115611-24356 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220629115611-24356 Namespace
:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s Schedu
ledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:57:26.489832   39984 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 11:57:26.493277   39984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:57:26.615477   39984 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 18:57:26.552906823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:57:26.615651   39984 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0629 11:57:26.615666   39984 cni.go:95] Creating CNI manager for ""
	I0629 11:57:26.615676   39984 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:57:26.615683   39984 start_flags.go:310] config:
	{Name:embed-certs-20220629115611-24356 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220629115611-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:57:26.659812   39984 out.go:177] * Starting control plane node embed-certs-20220629115611-24356 in cluster embed-certs-20220629115611-24356
	I0629 11:57:26.681749   39984 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 11:57:26.703472   39984 out.go:177] * Pulling base image ...
	I0629 11:57:26.745579   39984 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 11:57:26.745590   39984 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 11:57:26.745645   39984 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0629 11:57:26.745663   39984 cache.go:57] Caching tarball of preloaded images
	I0629 11:57:26.745789   39984 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 11:57:26.745807   39984 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0629 11:57:26.746584   39984 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/config.json ...
	I0629 11:57:26.809113   39984 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 11:57:26.809128   39984 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 11:57:26.809140   39984 cache.go:208] Successfully downloaded all kic artifacts
	I0629 11:57:26.809200   39984 start.go:352] acquiring machines lock for embed-certs-20220629115611-24356: {Name:mk0bdb566e64e1b997b63c331e0b76362860de65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 11:57:26.809294   39984 start.go:356] acquired machines lock for "embed-certs-20220629115611-24356" in 67.417µs
	I0629 11:57:26.809317   39984 start.go:94] Skipping create...Using existing machine configuration
	I0629 11:57:26.809326   39984 fix.go:55] fixHost starting: 
	I0629 11:57:26.809545   39984 cli_runner.go:164] Run: docker container inspect embed-certs-20220629115611-24356 --format={{.State.Status}}
	I0629 11:57:26.877064   39984 fix.go:103] recreateIfNeeded on embed-certs-20220629115611-24356: state=Stopped err=<nil>
	W0629 11:57:26.877097   39984 fix.go:129] unexpected machine state, will restart: <nil>
	I0629 11:57:26.921097   39984 out.go:177] * Restarting existing docker container for "embed-certs-20220629115611-24356" ...
	I0629 11:57:26.943046   39984 cli_runner.go:164] Run: docker start embed-certs-20220629115611-24356
	I0629 11:57:27.298057   39984 cli_runner.go:164] Run: docker container inspect embed-certs-20220629115611-24356 --format={{.State.Status}}
	I0629 11:57:27.370883   39984 kic.go:416] container "embed-certs-20220629115611-24356" state is running.
	I0629 11:57:27.371467   39984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220629115611-24356
	I0629 11:57:27.450035   39984 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/config.json ...
	I0629 11:57:27.450491   39984 machine.go:88] provisioning docker machine ...
	I0629 11:57:27.450523   39984 ubuntu.go:169] provisioning hostname "embed-certs-20220629115611-24356"
	I0629 11:57:27.450615   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:27.526657   39984 main.go:134] libmachine: Using SSH client type: native
	I0629 11:57:27.526849   39984 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60811 <nil> <nil>}
	I0629 11:57:27.526862   39984 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220629115611-24356 && echo "embed-certs-20220629115611-24356" | sudo tee /etc/hostname
	I0629 11:57:27.655714   39984 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220629115611-24356
	
	I0629 11:57:27.655798   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:27.730765   39984 main.go:134] libmachine: Using SSH client type: native
	I0629 11:57:27.730938   39984 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60811 <nil> <nil>}
	I0629 11:57:27.730953   39984 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220629115611-24356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220629115611-24356/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220629115611-24356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 11:57:27.848950   39984 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 11:57:27.848968   39984 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube}
	I0629 11:57:27.848989   39984 ubuntu.go:177] setting up certificates
	I0629 11:57:27.848996   39984 provision.go:83] configureAuth start
	I0629 11:57:27.849084   39984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220629115611-24356
	I0629 11:57:27.929959   39984 provision.go:138] copyHostCerts
	I0629 11:57:27.930123   39984 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem, removing ...
	I0629 11:57:27.930147   39984 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem
	I0629 11:57:27.930263   39984 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem (1082 bytes)
	I0629 11:57:27.930508   39984 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem, removing ...
	I0629 11:57:27.930517   39984 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem
	I0629 11:57:27.930576   39984 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem (1123 bytes)
	I0629 11:57:27.930756   39984 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem, removing ...
	I0629 11:57:27.930764   39984 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem
	I0629 11:57:27.930836   39984 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem (1675 bytes)
	I0629 11:57:27.930964   39984 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220629115611-24356 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220629115611-24356]
	I0629 11:57:27.999428   39984 provision.go:172] copyRemoteCerts
	I0629 11:57:27.999495   39984 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 11:57:27.999547   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:28.073332   39984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60811 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/embed-certs-20220629115611-24356/id_rsa Username:docker}
	I0629 11:57:28.161829   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0629 11:57:28.180214   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0629 11:57:28.196728   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0629 11:57:28.213826   39984 provision.go:86] duration metric: configureAuth took 364.804405ms
	I0629 11:57:28.213840   39984 ubuntu.go:193] setting minikube options for container-runtime
	I0629 11:57:28.214049   39984 config.go:178] Loaded profile config "embed-certs-20220629115611-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 11:57:28.214114   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:28.285550   39984 main.go:134] libmachine: Using SSH client type: native
	I0629 11:57:28.285697   39984 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60811 <nil> <nil>}
	I0629 11:57:28.285709   39984 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0629 11:57:28.404316   39984 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0629 11:57:28.404329   39984 ubuntu.go:71] root file system type: overlay
	I0629 11:57:28.404488   39984 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0629 11:57:28.404565   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:28.475355   39984 main.go:134] libmachine: Using SSH client type: native
	I0629 11:57:28.475494   39984 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60811 <nil> <nil>}
	I0629 11:57:28.475543   39984 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0629 11:57:28.601145   39984 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0629 11:57:28.601241   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:28.672126   39984 main.go:134] libmachine: Using SSH client type: native
	I0629 11:57:28.672296   39984 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60811 <nil> <nil>}
	I0629 11:57:28.672310   39984 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0629 11:57:28.795931   39984 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 11:57:28.795946   39984 machine.go:91] provisioned docker machine in 1.345405346s
	I0629 11:57:28.795961   39984 start.go:306] post-start starting for "embed-certs-20220629115611-24356" (driver="docker")
	I0629 11:57:28.795968   39984 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 11:57:28.796037   39984 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 11:57:28.796087   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:28.866293   39984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60811 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/embed-certs-20220629115611-24356/id_rsa Username:docker}
	I0629 11:57:28.951759   39984 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 11:57:28.955285   39984 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 11:57:28.955300   39984 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 11:57:28.955307   39984 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 11:57:28.955312   39984 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 11:57:28.955321   39984 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/addons for local assets ...
	I0629 11:57:28.955430   39984 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files for local assets ...
	I0629 11:57:28.955566   39984 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem -> 243562.pem in /etc/ssl/certs
	I0629 11:57:28.955718   39984 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 11:57:28.962930   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /etc/ssl/certs/243562.pem (1708 bytes)
	I0629 11:57:28.979721   39984 start.go:309] post-start completed in 183.73758ms
	I0629 11:57:28.979798   39984 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 11:57:28.979853   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:29.052656   39984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60811 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/embed-certs-20220629115611-24356/id_rsa Username:docker}
	I0629 11:57:29.137653   39984 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 11:57:29.142085   39984 fix.go:57] fixHost completed within 2.332689804s
	I0629 11:57:29.142096   39984 start.go:81] releasing machines lock for "embed-certs-20220629115611-24356", held for 2.332724366s
	I0629 11:57:29.142164   39984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220629115611-24356
	I0629 11:57:29.211897   39984 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 11:57:29.211897   39984 ssh_runner.go:195] Run: systemctl --version
	I0629 11:57:29.211957   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:29.211969   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:29.288098   39984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60811 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/embed-certs-20220629115611-24356/id_rsa Username:docker}
	I0629 11:57:29.290800   39984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60811 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/embed-certs-20220629115611-24356/id_rsa Username:docker}
	I0629 11:57:29.373189   39984 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0629 11:57:29.857399   39984 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0629 11:57:29.857467   39984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 11:57:29.869954   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 11:57:29.883131   39984 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0629 11:57:29.955029   39984 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0629 11:57:30.019548   39984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 11:57:30.090812   39984 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0629 11:57:30.329132   39984 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0629 11:57:30.399299   39984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 11:57:30.472742   39984 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0629 11:57:30.482620   39984 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0629 11:57:30.482690   39984 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0629 11:57:30.486666   39984 start.go:468] Will wait 60s for crictl version
	I0629 11:57:30.486722   39984 ssh_runner.go:195] Run: sudo crictl version
	I0629 11:57:30.587073   39984 start.go:477] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0629 11:57:30.587149   39984 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 11:57:30.622161   39984 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 11:57:30.700040   39984 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0629 11:57:30.700166   39984 cli_runner.go:164] Run: docker exec -t embed-certs-20220629115611-24356 dig +short host.docker.internal
	I0629 11:57:30.827612   39984 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0629 11:57:30.827718   39984 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0629 11:57:30.831832   39984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 11:57:30.841288   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:30.913390   39984 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 11:57:30.913460   39984 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 11:57:30.944383   39984 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0629 11:57:30.944399   39984 docker.go:533] Images already preloaded, skipping extraction
	I0629 11:57:30.944478   39984 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 11:57:30.975315   39984 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0629 11:57:30.975343   39984 cache_images.go:84] Images are preloaded, skipping loading
	I0629 11:57:30.975415   39984 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0629 11:57:31.045851   39984 cni.go:95] Creating CNI manager for ""
	I0629 11:57:31.050165   39984 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:57:31.050195   39984 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0629 11:57:31.050222   39984 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220629115611-24356 NodeName:embed-certs-20220629115611-24356 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 11:57:31.050404   39984 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-20220629115611-24356"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0629 11:57:31.050551   39984 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-20220629115611-24356 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220629115611-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0629 11:57:31.050644   39984 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0629 11:57:31.059402   39984 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 11:57:31.059454   39984 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0629 11:57:31.066631   39984 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (494 bytes)
	I0629 11:57:31.079513   39984 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 11:57:31.092419   39984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I0629 11:57:31.105233   39984 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0629 11:57:31.108958   39984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 11:57:31.118325   39984 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356 for IP: 192.168.67.2
	I0629 11:57:31.118436   39984 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key
	I0629 11:57:31.118497   39984 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key
	I0629 11:57:31.118573   39984 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/client.key
	I0629 11:57:31.118636   39984 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/apiserver.key.c7fa3a9e
	I0629 11:57:31.118686   39984 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/proxy-client.key
	I0629 11:57:31.118892   39984 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem (1338 bytes)
	W0629 11:57:31.118931   39984 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356_empty.pem, impossibly tiny 0 bytes
	I0629 11:57:31.118944   39984 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem (1679 bytes)
	I0629 11:57:31.118978   39984 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem (1082 bytes)
	I0629 11:57:31.119010   39984 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem (1123 bytes)
	I0629 11:57:31.119037   39984 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem (1675 bytes)
	I0629 11:57:31.119098   39984 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem (1708 bytes)
	I0629 11:57:31.119668   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0629 11:57:31.136862   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0629 11:57:31.153564   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0629 11:57:31.170777   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0629 11:57:31.187816   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 11:57:31.204573   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0629 11:57:31.221464   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 11:57:31.239026   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0629 11:57:31.255730   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem --> /usr/share/ca-certificates/24356.pem (1338 bytes)
	I0629 11:57:31.272688   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /usr/share/ca-certificates/243562.pem (1708 bytes)
	I0629 11:57:31.289538   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 11:57:31.306720   39984 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0629 11:57:31.319465   39984 ssh_runner.go:195] Run: openssl version
	I0629 11:57:31.324535   39984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24356.pem && ln -fs /usr/share/ca-certificates/24356.pem /etc/ssl/certs/24356.pem"
	I0629 11:57:31.332540   39984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24356.pem
	I0629 11:57:31.336652   39984 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 17:58 /usr/share/ca-certificates/24356.pem
	I0629 11:57:31.336698   39984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24356.pem
	I0629 11:57:31.342301   39984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24356.pem /etc/ssl/certs/51391683.0"
	I0629 11:57:31.349622   39984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/243562.pem && ln -fs /usr/share/ca-certificates/243562.pem /etc/ssl/certs/243562.pem"
	I0629 11:57:31.357282   39984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/243562.pem
	I0629 11:57:31.361696   39984 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 17:58 /usr/share/ca-certificates/243562.pem
	I0629 11:57:31.361747   39984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/243562.pem
	I0629 11:57:31.366990   39984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/243562.pem /etc/ssl/certs/3ec20f2e.0"
	I0629 11:57:31.374502   39984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 11:57:31.382218   39984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:57:31.385803   39984 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 17:54 /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:57:31.385848   39984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:57:31.390826   39984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 11:57:31.397764   39984 kubeadm.go:395] StartCluster: {Name:embed-certs-20220629115611-24356 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220629115611-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:57:31.397873   39984 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 11:57:31.427173   39984 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0629 11:57:31.434832   39984 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0629 11:57:31.434846   39984 kubeadm.go:626] restartCluster start
	I0629 11:57:31.434897   39984 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0629 11:57:31.441586   39984 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:31.441651   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:31.513483   39984 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220629115611-24356" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 11:57:31.513643   39984 kubeconfig.go:127] "embed-certs-20220629115611-24356" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig - will repair!
	I0629 11:57:31.513999   39984 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk20ebad566718388182fa7c9da1cb4ef6bd9ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:57:31.515316   39984 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0629 11:57:31.530420   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:31.530480   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:31.538594   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:31.738692   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:31.738802   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:31.747924   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:31.940764   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:31.940962   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:31.953388   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:32.138925   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:32.139021   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:32.150641   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:32.339007   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:32.339144   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:32.350071   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:32.538785   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:32.538883   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:32.549429   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:32.740773   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:32.740914   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:32.751283   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:32.940779   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:32.940965   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:32.952319   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:33.139151   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:33.139215   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:33.149931   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:33.338763   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:33.338882   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:33.347730   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:33.540825   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:33.540989   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:33.551698   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:33.739521   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:33.739687   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:33.750188   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:33.939155   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:33.939254   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:33.949817   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:34.140162   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:34.140353   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:34.150863   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:34.340139   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:34.340257   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:34.351094   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:34.540169   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:34.540353   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:34.551334   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:34.551344   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:34.551403   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:34.559886   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:34.559897   39984 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0629 11:57:34.559905   39984 kubeadm.go:1092] stopping kube-system containers ...
	I0629 11:57:34.559958   39984 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 11:57:34.590002   39984 docker.go:434] Stopping containers: [666dcbf78fe0 ddb4a3ba17a8 6b729b461ef0 b814135cd0a1 e13a428052eb 0dd4b988196b fae1c540c6c3 4d48afea68d9 196dbfd07a20 439d99c75b27 cc212149d36c 984a7e540bed 80e09584f648 9db02521aa04 3369302f8f17 d66a49ab53be]
	I0629 11:57:34.590078   39984 ssh_runner.go:195] Run: docker stop 666dcbf78fe0 ddb4a3ba17a8 6b729b461ef0 b814135cd0a1 e13a428052eb 0dd4b988196b fae1c540c6c3 4d48afea68d9 196dbfd07a20 439d99c75b27 cc212149d36c 984a7e540bed 80e09584f648 9db02521aa04 3369302f8f17 d66a49ab53be
	I0629 11:57:34.622333   39984 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0629 11:57:34.633894   39984 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 11:57:34.642013   39984 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun 29 18:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun 29 18:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Jun 29 18:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun 29 18:56 /etc/kubernetes/scheduler.conf
	
	I0629 11:57:34.642067   39984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0629 11:57:34.650335   39984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0629 11:57:34.658274   39984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0629 11:57:34.666006   39984 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:34.666067   39984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0629 11:57:34.674854   39984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0629 11:57:34.682511   39984 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:34.682565   39984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0629 11:57:34.689948   39984 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 11:57:34.697944   39984 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0629 11:57:34.697960   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:57:34.743910   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:57:35.702128   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:57:35.884195   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:57:35.931141   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:57:35.978909   39984 api_server.go:51] waiting for apiserver process to appear ...
	I0629 11:57:35.978974   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:57:36.489509   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:57:36.991297   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:57:37.491468   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:57:37.539412   39984 api_server.go:71] duration metric: took 1.560450953s to wait for apiserver process to appear ...
	I0629 11:57:37.539430   39984 api_server.go:87] waiting for apiserver healthz status ...
	I0629 11:57:37.539444   39984 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60815/healthz ...
	I0629 11:57:40.290730   39984 api_server.go:266] https://127.0.0.1:60815/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0629 11:57:40.290748   39984 api_server.go:102] status: https://127.0.0.1:60815/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0629 11:57:40.792942   39984 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60815/healthz ...
	I0629 11:57:40.800561   39984 api_server.go:266] https://127.0.0.1:60815/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 11:57:40.800574   39984 api_server.go:102] status: https://127.0.0.1:60815/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 11:57:41.291032   39984 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60815/healthz ...
	I0629 11:57:41.296338   39984 api_server.go:266] https://127.0.0.1:60815/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 11:57:41.296358   39984 api_server.go:102] status: https://127.0.0.1:60815/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 11:57:41.791011   39984 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60815/healthz ...
	I0629 11:57:41.797671   39984 api_server.go:266] https://127.0.0.1:60815/healthz returned 200:
	ok
	I0629 11:57:41.804473   39984 api_server.go:140] control plane version: v1.24.2
	I0629 11:57:41.804485   39984 api_server.go:130] duration metric: took 4.264923117s to wait for apiserver health ...
	I0629 11:57:41.804492   39984 cni.go:95] Creating CNI manager for ""
	I0629 11:57:41.804502   39984 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:57:41.804513   39984 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 11:57:41.832519   39984 system_pods.go:59] 8 kube-system pods found
	I0629 11:57:41.832535   39984 system_pods.go:61] "coredns-6d4b75cb6d-pnzfc" [d1c86d77-1548-4a2f-b9c7-42b4bf4a6a3d] Running
	I0629 11:57:41.832541   39984 system_pods.go:61] "etcd-embed-certs-20220629115611-24356" [d91824a5-2512-44b7-82ef-0fa1347aaabf] Running
	I0629 11:57:41.832547   39984 system_pods.go:61] "kube-apiserver-embed-certs-20220629115611-24356" [da634837-5c4e-4f9f-9a67-2cc008c0440b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0629 11:57:41.832553   39984 system_pods.go:61] "kube-controller-manager-embed-certs-20220629115611-24356" [52be6bd2-1731-4717-bc8a-e66fd7626c22] Running
	I0629 11:57:41.832556   39984 system_pods.go:61] "kube-proxy-pcxgq" [27e07fcd-c6b6-438e-a098-a226b21b33e1] Running
	I0629 11:57:41.832561   39984 system_pods.go:61] "kube-scheduler-embed-certs-20220629115611-24356" [09df9d02-46aa-44bc-afe4-b16bcd31afd0] Running
	I0629 11:57:41.832566   39984 system_pods.go:61] "metrics-server-5c6f97fb75-rxdvx" [f03ad7f1-c31c-4563-a988-6b36ea877e9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 11:57:41.832573   39984 system_pods.go:61] "storage-provisioner" [941d4d53-8827-455c-bf13-eccd87cfbfe5] Running
	I0629 11:57:41.832577   39984 system_pods.go:74] duration metric: took 28.058937ms to wait for pod list to return data ...
	I0629 11:57:41.832583   39984 node_conditions.go:102] verifying NodePressure condition ...
	I0629 11:57:41.835565   39984 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0629 11:57:41.835583   39984 node_conditions.go:123] node cpu capacity is 6
	I0629 11:57:41.835591   39984 node_conditions.go:105] duration metric: took 3.005124ms to run NodePressure ...
	I0629 11:57:41.835602   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:57:42.037431   39984 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0629 11:57:42.043980   39984 kubeadm.go:777] kubelet initialised
	I0629 11:57:42.043992   39984 kubeadm.go:778] duration metric: took 6.540999ms waiting for restarted kubelet to initialise ...
	I0629 11:57:42.044000   39984 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 11:57:42.050820   39984 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-pnzfc" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:42.056213   39984 pod_ready.go:92] pod "coredns-6d4b75cb6d-pnzfc" in "kube-system" namespace has status "Ready":"True"
	I0629 11:57:42.056222   39984 pod_ready.go:81] duration metric: took 5.36795ms waiting for pod "coredns-6d4b75cb6d-pnzfc" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:42.056229   39984 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:42.061951   39984 pod_ready.go:92] pod "etcd-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 11:57:42.061961   39984 pod_ready.go:81] duration metric: took 5.728041ms waiting for pod "etcd-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:42.061968   39984 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:44.073865   39984 pod_ready.go:102] pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 11:57:46.077904   39984 pod_ready.go:102] pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 11:57:48.576009   39984 pod_ready.go:102] pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 11:57:51.075775   39984 pod_ready.go:102] pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 11:57:53.075358   39984 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 11:57:53.075371   39984 pod_ready.go:81] duration metric: took 11.01306776s waiting for pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:53.075377   39984 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:53.079816   39984 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 11:57:53.079824   39984 pod_ready.go:81] duration metric: took 4.442048ms waiting for pod "kube-controller-manager-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:53.079829   39984 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pcxgq" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:53.084576   39984 pod_ready.go:92] pod "kube-proxy-pcxgq" in "kube-system" namespace has status "Ready":"True"
	I0629 11:57:53.084583   39984 pod_ready.go:81] duration metric: took 4.749511ms waiting for pod "kube-proxy-pcxgq" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:53.084589   39984 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:53.088625   39984 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 11:57:53.088632   39984 pod_ready.go:81] duration metric: took 4.039623ms waiting for pod "kube-scheduler-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:53.088640   39984 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:55.097461   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:57:57.100786   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:57:59.601286   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:02.099451   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:04.600718   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:07.101221   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:09.600874   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:12.099278   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:14.601619   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:17.101075   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:19.102702   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:21.600733   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:24.099200   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:26.102268   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:28.599567   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:30.599655   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:32.599970   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:35.101359   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:37.600978   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:40.101820   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:42.601212   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:45.099127   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:47.100293   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:49.101795   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:51.600853   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:54.099798   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:56.102348   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:58.599972   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:00.602127   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:03.099999   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:05.602102   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	W0629 11:59:09.269281   39321 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0629 11:59:09.269312   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0629 11:59:09.691823   39321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 11:59:09.701755   39321 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 11:59:09.701805   39321 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 11:59:09.709759   39321 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 11:59:09.709777   39321 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 11:59:10.453324   39321 out.go:204]   - Generating certificates and keys ...
	I0629 11:59:08.103868   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:10.600504   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:13.100908   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:15.103349   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:11.075112   39321 out.go:204]   - Booting up control plane ...
	I0629 11:59:17.600597   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:19.602441   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:22.101027   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:24.601921   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:27.102740   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:29.103218   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:31.602024   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:33.603482   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:36.104291   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:38.601027   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:40.602533   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:42.604039   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:45.105214   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:47.603677   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:49.606151   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:52.104004   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:54.106224   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:56.605130   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:58.606838   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:01.105420   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:03.107040   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:05.605975   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:07.607176   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:09.607415   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:12.108174   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:14.607016   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:16.608058   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:18.608278   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:21.108388   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:23.110530   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:25.609089   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:27.610444   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:30.108624   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:32.109598   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:34.613349   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:37.108006   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:39.109710   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:41.608341   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:43.610410   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:46.106908   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:48.108652   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:50.608608   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:52.609008   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:55.109271   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:57.610864   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:00.109777   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:02.109951   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:04.110413   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:06.018998   39321 kubeadm.go:397] StartCluster complete in 7m59.760603139s
	I0629 12:01:06.019078   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 12:01:06.047361   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.083489   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 12:01:06.083580   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 12:01:06.118045   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.118058   39321 logs.go:276] No container was found matching "etcd"
	I0629 12:01:06.118119   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 12:01:06.148512   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.148524   39321 logs.go:276] No container was found matching "coredns"
	I0629 12:01:06.148587   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 12:01:06.177707   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.177719   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 12:01:06.177776   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 12:01:06.210822   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.210835   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 12:01:06.210895   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 12:01:06.243800   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.243812   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 12:01:06.243868   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 12:01:06.274291   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.274305   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 12:01:06.274368   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 12:01:06.308104   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.308119   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 12:01:06.308126   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 12:01:06.308133   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 12:01:06.347949   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 12:01:06.347968   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 12:01:06.361249   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 12:01:06.361264   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 12:01:06.413780   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 12:01:06.413793   39321 logs.go:123] Gathering logs for Docker ...
	I0629 12:01:06.413800   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 12:01:06.427622   39321 logs.go:123] Gathering logs for container status ...
	I0629 12:01:06.427633   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 12:01:08.487011   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.059302402s)
	W0629 12:01:08.487125   39321 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0629 12:01:08.487150   39321 out.go:239] * 
	W0629 12:01:08.487259   39321 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0629 12:01:08.487274   39321 out.go:239] * 
	W0629 12:01:08.487946   39321 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0629 12:01:08.550616   39321 out.go:177] 
	W0629 12:01:08.592802   39321 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0629 12:01:08.592939   39321 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0629 12:01:08.593004   39321 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0629 12:01:08.634371   39321 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-29 18:53:02 UTC, end at Wed 2022-06-29 19:01:10 UTC. --
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 systemd[1]: Stopping Docker Application Container Engine...
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[131]: time="2022-06-29T18:53:05.216575736Z" level=info msg="Processing signal 'terminated'"
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[131]: time="2022-06-29T18:53:05.217825930Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[131]: time="2022-06-29T18:53:05.218386582Z" level=info msg="Daemon shutdown complete"
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 systemd[1]: docker.service: Succeeded.
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 systemd[1]: Stopped Docker Application Container Engine.
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 systemd[1]: Starting Docker Application Container Engine...
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.272004427Z" level=info msg="Starting up"
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.273752497Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.273789659Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.273812919Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.273823680Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.274963883Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.275024151Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.275067758Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.275110265Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.278499483Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.281321453Z" level=info msg="Loading containers: start."
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.354206270Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.383916961Z" level=info msg="Loading containers: done."
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.391706828Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.391760406Z" level=info msg="Daemon has completed initialization"
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 systemd[1]: Started Docker Application Container Engine.
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.417864571Z" level=info msg="API listen on [::]:2376"
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.420446680Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* time="2022-06-29T19:01:12Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  19:01:12 up  1:09,  0 users,  load average: 0.37, 1.47, 1.44
	Linux old-k8s-version-20220629114717-24356 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-29 18:53:02 UTC, end at Wed 2022-06-29 19:01:12 UTC. --
	Jun 29 19:01:10 old-k8s-version-20220629114717-24356 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 29 19:01:11 old-k8s-version-20220629114717-24356 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 161.
	Jun 29 19:01:11 old-k8s-version-20220629114717-24356 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 29 19:01:11 old-k8s-version-20220629114717-24356 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 29 19:01:11 old-k8s-version-20220629114717-24356 kubelet[14411]: I0629 19:01:11.565952   14411 server.go:410] Version: v1.16.0
	Jun 29 19:01:11 old-k8s-version-20220629114717-24356 kubelet[14411]: I0629 19:01:11.566273   14411 plugins.go:100] No cloud provider specified.
	Jun 29 19:01:11 old-k8s-version-20220629114717-24356 kubelet[14411]: I0629 19:01:11.566317   14411 server.go:773] Client rotation is on, will bootstrap in background
	Jun 29 19:01:11 old-k8s-version-20220629114717-24356 kubelet[14411]: I0629 19:01:11.568006   14411 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 29 19:01:11 old-k8s-version-20220629114717-24356 kubelet[14411]: W0629 19:01:11.568679   14411 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 29 19:01:11 old-k8s-version-20220629114717-24356 kubelet[14411]: W0629 19:01:11.568741   14411 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 29 19:01:11 old-k8s-version-20220629114717-24356 kubelet[14411]: F0629 19:01:11.568790   14411 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 29 19:01:11 old-k8s-version-20220629114717-24356 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 29 19:01:11 old-k8s-version-20220629114717-24356 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 29 19:01:12 old-k8s-version-20220629114717-24356 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 162.
	Jun 29 19:01:12 old-k8s-version-20220629114717-24356 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 29 19:01:12 old-k8s-version-20220629114717-24356 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 29 19:01:12 old-k8s-version-20220629114717-24356 kubelet[14422]: I0629 19:01:12.320511   14422 server.go:410] Version: v1.16.0
	Jun 29 19:01:12 old-k8s-version-20220629114717-24356 kubelet[14422]: I0629 19:01:12.320944   14422 plugins.go:100] No cloud provider specified.
	Jun 29 19:01:12 old-k8s-version-20220629114717-24356 kubelet[14422]: I0629 19:01:12.320986   14422 server.go:773] Client rotation is on, will bootstrap in background
	Jun 29 19:01:12 old-k8s-version-20220629114717-24356 kubelet[14422]: I0629 19:01:12.322804   14422 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 29 19:01:12 old-k8s-version-20220629114717-24356 kubelet[14422]: W0629 19:01:12.325021   14422 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 29 19:01:12 old-k8s-version-20220629114717-24356 kubelet[14422]: W0629 19:01:12.325120   14422 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 29 19:01:12 old-k8s-version-20220629114717-24356 kubelet[14422]: F0629 19:01:12.325193   14422 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 29 19:01:12 old-k8s-version-20220629114717-24356 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 29 19:01:12 old-k8s-version-20220629114717-24356 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0629 12:01:12.562423   40277 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356: exit status 2 (451.919859ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220629114717-24356" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (492.43s)

TestStartStop/group/no-preload/serial/Pause (44.31s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-20220629114832-24356 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220629114832-24356 -n no-preload-20220629114832-24356
E0629 11:55:27.297843   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/false-20220629112951-24356/client.crt: no such file or directory
E0629 11:55:36.546676   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629112951-24356/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220629114832-24356 -n no-preload-20220629114832-24356: exit status 2 (16.107644741s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220629114832-24356 -n no-preload-20220629114832-24356
E0629 11:55:47.014906   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629112950-24356/client.crt: no such file or directory
E0629 11:55:50.916554   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220629114832-24356 -n no-preload-20220629114832-24356: exit status 2 (16.113505876s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-20220629114832-24356 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220629114832-24356 -n no-preload-20220629114832-24356
E0629 11:55:58.621787   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220629114832-24356 -n no-preload-20220629114832-24356
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220629114832-24356
helpers_test.go:235: (dbg) docker inspect no-preload-20220629114832-24356:

-- stdout --
	[
	    {
	        "Id": "24a08bf9f03fd8afc3d791762e795669118d5cb1d0d978266cfbf80c55d86fab",
	        "Created": "2022-06-29T18:48:34.666212575Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 238271,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T18:49:58.692676896Z",
	            "FinishedAt": "2022-06-29T18:49:56.792943722Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/24a08bf9f03fd8afc3d791762e795669118d5cb1d0d978266cfbf80c55d86fab/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24a08bf9f03fd8afc3d791762e795669118d5cb1d0d978266cfbf80c55d86fab/hostname",
	        "HostsPath": "/var/lib/docker/containers/24a08bf9f03fd8afc3d791762e795669118d5cb1d0d978266cfbf80c55d86fab/hosts",
	        "LogPath": "/var/lib/docker/containers/24a08bf9f03fd8afc3d791762e795669118d5cb1d0d978266cfbf80c55d86fab/24a08bf9f03fd8afc3d791762e795669118d5cb1d0d978266cfbf80c55d86fab-json.log",
	        "Name": "/no-preload-20220629114832-24356",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220629114832-24356:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220629114832-24356",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e9e9aedbf3bec43acee919ebc9f8512bf6b25bacbd1ae4f19ce517451157914c-init/diff:/var/lib/docker/overlay2/fffebe0fdfada5807aeb835ff23043496ab70477725ee4f168b630301ac03e45/diff:/var/lib/docker/overlay2/d4eb6d2f34aa8e5c143d900dccdec5da9e3d130567442e6745d4efac5202fe49/diff:/var/lib/docker/overlay2/eb35fadba12ed9c48500d69b77e98e7dd72e90d3de5197d58b370df5b5dca4c7/diff:/var/lib/docker/overlay2/7b63894f671ef1edaa7c3b80a2acbde52dcdb21970e320799b6884e79553ea3e/diff:/var/lib/docker/overlay2/3740b6bc6ff226137eb09a6350d4395dc04bd9012c6c66125dc2ea6b663082cd/diff:/var/lib/docker/overlay2/a2fda66ed4937725e85838baed61cac418abe2ba55b4e664bf944246efcdd371/diff:/var/lib/docker/overlay2/574408913c5c73ee699b85768bbb4c0ce70e697bf6eb623e32017c62e8413acd/diff:/var/lib/docker/overlay2/1cde03c3877bfb18ad0533f814863e3030abec268ff30faceab8815ea7e2daf2/diff:/var/lib/docker/overlay2/52bf889e64b2ea0160f303622d5febb9c52b864e5a6dc2bfa5db90933ccaaa29/diff:/var/lib/docker/overlay2/b131e6ae4a7a7f5705d087e4001676276e4daa26d6acfc99799bb4992e322410/diff:/var/lib/docker/overlay2/3f5c774f6f46936a974bfc6530b012fda75a59b22450e3342486fe400ab4b531/diff:/var/lib/docker/overlay2/8462528084f0c44a79e421427e0e4bc9ddd7642428c47ff1899d41b265223245/diff:/var/lib/docker/overlay2/cb9765866d13ba37669ec242ea0a1af87c92c7291c716e52037a2ccadc64ac82/diff:/var/lib/docker/overlay2/f0d06e6fa53f3ca9622f1efcfac6fe3fd18d2e5b9e07be3d624b0b9987073e55/diff:/var/lib/docker/overlay2/4ebd12d8b25cff2d3d8a989c047b696088121f0964cc7f94c6d0178ef16e3e1f/diff:/var/lib/docker/overlay2/40e16f5720fd3a8c1c8792aea0ec143af819f19cad845dde40b57ed7e372ab73/diff:/var/lib/docker/overlay2/3ce5ee64ba683c997a13b7ffa65978b4c9652772729737facd794209d49251c3/diff:/var/lib/docker/overlay2/c55c549a78d490ea576942661ba65103ea2992693548217973bb8fa1a5948b74/diff:/var/lib/docker/overlay2/4651b16dbc2e22b8a43dc1154546514f2076168d12f9c108f85fe7c6e60325f0/diff:/var/lib/docker/overlay2/9576343ea03501b15b520a83ffdc675c6d9ecd501f6ffcf6564dd75aa4f2812a/diff:/var/lib/docker/overlay2/635ba7d01f96fd1ec1acabf157f4e5c00cbf80adf65b7f8873e444745fef2c9b/diff:/var/lib/docker/overlay2/6bbe0ce6ca00a7eb5bd7c22def5fcab4ebecab4a0b4cbc5ed236429671a41b6c/diff:/var/lib/docker/overlay2/b335551ba0fcfd6bff6ef5627289041f3083dc338e67b4f4728d4937bb6fb33a/diff:/var/lib/docker/overlay2/58cd90f6ad9016f3c4befb63eac504c9d2f0fc66251c5c9e3348080785d3cec4/diff:/var/lib/docker/overlay2/b7d943a8463e032d405d531846436b89574f10efeea6e4f2df92e3bb0e169d8e/diff:/var/lib/docker/overlay2/e633899f71c18e322af1b75837392bc89fd4275534b5bc70037965b0b80a770d/diff:/var/lib/docker/overlay2/651aabda39b5851bd186e23bc84f1029d819ed8eb032b13ac12f50f3d1486bfb/diff:/var/lib/docker/overlay2/3b137e27694d242a419b3fd2f8605837edfe77dae9462c63c3d7b41538e82591/diff:/var/lib/docker/overlay2/e9d4369b871c47acb146b73f8cbe14b89b0f74027df9117a7dc73f5dee8fee1c/diff:/var/lib/docker/overlay2/9379269362a969b07cc7d7f9faff9fa3b745529df38758733014a5dbe2470775/diff:/var/lib/docker/overlay2/9231c154723fa536d9894f703ec0388448e8611d5a01d54bca3a5b0a0b17ffd2/diff:/var/lib/docker/overlay2/9610e37ded5c6da7bd2c8edc56c3ae864637bb354f8ea3d6d1ccee6bd5c2aa7f/diff:/var/lib/docker/overlay2/025ecca5e756b1b8177204df7b2f2567a76dda456b2f1a8e312efd63150a8943/diff:/var/lib/docker/overlay2/7e69089e438e096c36ea0a4a37280fd036841e3287e57635e3407eb58fc0b6da/diff:/var/lib/docker/overlay2/c6d9ef67ed33e64c8ac8c4cdc7c33eb68f5266987969676165cabc2cf2fd346b/diff:/var/lib/docker/overlay2/394627c68237f7993b91eb0c377001630bb2e709dd58f65d899d44a3586dae91/diff:/var/lib/docker/overlay2/0c0c3c94789fc85cd70d9ee2b56d67ce6471d4dced47f21f15152d4edb6bc3e5/diff:/var/lib/docker/overlay2/849809e48c9bcbfe092aa063fcd274f284eeacde89acbb602b439d4cf0aef9b6/diff:/var/lib/docker/overlay2/49c27f0a55f204b161aa2da33ba8004f46cb93bf673975ad1b6286ce659db632/diff:/var/lib/docker/overlay2/a712a8f5cdb2f3840c706296240407405826d2936df034393c1ddf3cf2480b5f/diff:/var/lib/docker/overlay2/47949bfd134ff7a50def5e9b3af3424faf216354d1f157552f3c63c67c2728ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e9e9aedbf3bec43acee919ebc9f8512bf6b25bacbd1ae4f19ce517451157914c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e9e9aedbf3bec43acee919ebc9f8512bf6b25bacbd1ae4f19ce517451157914c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e9e9aedbf3bec43acee919ebc9f8512bf6b25bacbd1ae4f19ce517451157914c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220629114832-24356",
	                "Source": "/var/lib/docker/volumes/no-preload-20220629114832-24356/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220629114832-24356",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220629114832-24356",
	                "name.minikube.sigs.k8s.io": "no-preload-20220629114832-24356",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf5fd47197df49ad1e61e112021a02331bbbb2328e17ef80b5702122456d7d14",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60184"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60185"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60186"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60187"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60183"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cf5fd47197df",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220629114832-24356": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "24a08bf9f03f",
	                        "no-preload-20220629114832-24356"
	                    ],
	                    "NetworkID": "280f12b17d38629a814fb7e64f456c21f5f6c8f0999ecd49f03be81ee0dfd3ee",
	                    "EndpointID": "c28bcd59329738d9d282cd041acbc33e3012203d89288366b496af7623c901f5",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220629114832-24356 -n no-preload-20220629114832-24356
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-20220629114832-24356 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p no-preload-20220629114832-24356 logs -n 25: (2.651033773s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-20220629112951-24356                    | minikube | jenkins | v1.26.0 | 29 Jun 22 11:45 PDT | 29 Jun 22 11:45 PDT |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| delete  | -p false-20220629112951-24356                     | minikube | jenkins | v1.26.0 | 29 Jun 22 11:45 PDT | 29 Jun 22 11:45 PDT |
	| start   | -p bridge-20220629112950-24356                    | minikube | jenkins | v1.26.0 | 29 Jun 22 11:45 PDT | 29 Jun 22 11:46 PDT |
	|         | --memory=2048                                     |          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |          |         |         |                     |                     |
	|         | --wait-timeout=5m --cni=bridge                    |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	| delete  | -p calico-20220629112951-24356                    | minikube | jenkins | v1.26.0 | 29 Jun 22 11:45 PDT | 29 Jun 22 11:45 PDT |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:45 PDT | 29 Jun 22 11:46 PDT |
	|         | enable-default-cni-20220629112950-24356           |          |         |         |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |          |         |         |                     |                     |
	|         | --enable-default-cni=true                         |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	| ssh     | -p bridge-20220629112950-24356                    | minikube | jenkins | v1.26.0 | 29 Jun 22 11:46 PDT | 29 Jun 22 11:46 PDT |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| delete  | -p bridge-20220629112950-24356                    | minikube | jenkins | v1.26.0 | 29 Jun 22 11:46 PDT | 29 Jun 22 11:46 PDT |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:46 PDT | 29 Jun 22 11:47 PDT |
	|         | kubenet-20220629112950-24356                      |          |         |         |                     |                     |
	|         | --memory=2048                                     |          |         |         |                     |                     |
	|         | --alsologtostderr                                 |          |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |          |         |         |                     |                     |
	|         | --network-plugin=kubenet                          |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:46 PDT | 29 Jun 22 11:46 PDT |
	|         | enable-default-cni-20220629112950-24356           |          |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:47 PDT | 29 Jun 22 11:47 PDT |
	|         | enable-default-cni-20220629112950-24356           |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:47 PDT | 29 Jun 22 11:47 PDT |
	|         | kubenet-20220629112950-24356                      |          |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:47 PDT |                     |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |          |         |         |                     |                     |
	|         | --disable-driver-mounts                           |          |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |          |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:48 PDT | 29 Jun 22 11:48 PDT |
	|         | kubenet-20220629112950-24356                      |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:48 PDT | 29 Jun 22 11:49 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --preload=false                       |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 29 Jun 22 11:49 PDT | 29 Jun 22 11:49 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:49 PDT | 29 Jun 22 11:49 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 29 Jun 22 11:49 PDT | 29 Jun 22 11:49 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:49 PDT | 29 Jun 22 11:54 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --preload=false                       |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 29 Jun 22 11:51 PDT |                     |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:52 PDT | 29 Jun 22 11:53 PDT |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 29 Jun 22 11:53 PDT | 29 Jun 22 11:53 PDT |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:53 PDT |                     |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |          |         |         |                     |                     |
	|         | --disable-driver-mounts                           |          |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |          |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:55 PDT | 29 Jun 22 11:55 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | sudo crictl images -o json                        |          |         |         |                     |                     |
	| pause   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:55 PDT | 29 Jun 22 11:55 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	| unpause | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:55 PDT | 29 Jun 22 11:55 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/29 11:53:01
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 11:53:01.020541   39321 out.go:296] Setting OutFile to fd 1 ...
	I0629 11:53:01.020674   39321 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:53:01.020678   39321 out.go:309] Setting ErrFile to fd 2...
	I0629 11:53:01.020682   39321 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:53:01.021047   39321 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 11:53:01.021305   39321 out.go:303] Setting JSON to false
	I0629 11:53:01.036590   39321 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":10349,"bootTime":1656518432,"procs":373,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0629 11:53:01.036679   39321 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 11:53:01.057889   39321 out.go:177] * [old-k8s-version-20220629114717-24356] minikube v1.26.0 on Darwin 12.4
	I0629 11:53:01.100418   39321 notify.go:193] Checking for updates...
	I0629 11:53:01.121817   39321 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 11:53:01.142983   39321 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 11:53:01.164005   39321 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0629 11:53:01.185015   39321 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 11:53:01.206165   39321 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 11:53:01.228648   39321 config.go:178] Loaded profile config "old-k8s-version-20220629114717-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0629 11:53:01.251012   39321 out.go:177] * Kubernetes 1.24.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.24.2
	I0629 11:53:01.271945   39321 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 11:53:01.341174   39321 docker.go:137] docker version: linux-20.10.16
	I0629 11:53:01.341305   39321 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:53:01.464360   39321 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 18:53:01.403963306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:53:01.486719   39321 out.go:177] * Using the docker driver based on existing profile
	I0629 11:53:01.529615   39321 start.go:284] selected driver: docker
	I0629 11:53:01.529644   39321 start.go:808] validating driver "docker" against &{Name:old-k8s-version-20220629114717-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220629114717-24356 N
amespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: M
ultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:53:01.529795   39321 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 11:53:01.533103   39321 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:53:01.655473   39321 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 18:53:01.595697353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:53:01.655650   39321 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0629 11:53:01.655668   39321 cni.go:95] Creating CNI manager for ""
	I0629 11:53:01.655678   39321 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:53:01.655687   39321 start_flags.go:310] config:
	{Name:old-k8s-version-20220629114717-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220629114717-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSD
omain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false Mount
String:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:53:01.677730   39321 out.go:177] * Starting control plane node old-k8s-version-20220629114717-24356 in cluster old-k8s-version-20220629114717-24356
	I0629 11:53:01.699300   39321 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 11:53:01.720322   39321 out.go:177] * Pulling base image ...
	I0629 11:53:01.762354   39321 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0629 11:53:01.762361   39321 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 11:53:01.762438   39321 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0629 11:53:01.762454   39321 cache.go:57] Caching tarball of preloaded images
	I0629 11:53:01.762660   39321 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 11:53:01.762692   39321 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0629 11:53:01.763793   39321 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/config.json ...
	I0629 11:53:01.827401   39321 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 11:53:01.827423   39321 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 11:53:01.827436   39321 cache.go:208] Successfully downloaded all kic artifacts
	I0629 11:53:01.827507   39321 start.go:352] acquiring machines lock for old-k8s-version-20220629114717-24356: {Name:mkeaf278b11a6771761242ef819919656a0edee3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 11:53:01.827595   39321 start.go:356] acquired machines lock for "old-k8s-version-20220629114717-24356" in 67.458µs
	I0629 11:53:01.827616   39321 start.go:94] Skipping create...Using existing machine configuration
	I0629 11:53:01.827625   39321 fix.go:55] fixHost starting: 
	I0629 11:53:01.827860   39321 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220629114717-24356 --format={{.State.Status}}
	I0629 11:53:01.894263   39321 fix.go:103] recreateIfNeeded on old-k8s-version-20220629114717-24356: state=Stopped err=<nil>
	W0629 11:53:01.894295   39321 fix.go:129] unexpected machine state, will restart: <nil>
	I0629 11:53:01.937823   39321 out.go:177] * Restarting existing docker container for "old-k8s-version-20220629114717-24356" ...
	I0629 11:52:57.932423   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:52:59.933063   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:02.433957   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:01.958803   39321 cli_runner.go:164] Run: docker start old-k8s-version-20220629114717-24356
	I0629 11:53:02.302625   39321 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220629114717-24356 --format={{.State.Status}}
	I0629 11:53:02.379116   39321 kic.go:416] container "old-k8s-version-20220629114717-24356" state is running.
	I0629 11:53:02.379733   39321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220629114717-24356
	I0629 11:53:02.458199   39321 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/config.json ...
	I0629 11:53:02.458585   39321 machine.go:88] provisioning docker machine ...
	I0629 11:53:02.458625   39321 ubuntu.go:169] provisioning hostname "old-k8s-version-20220629114717-24356"
	I0629 11:53:02.458691   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:02.536976   39321 main.go:134] libmachine: Using SSH client type: native
	I0629 11:53:02.537219   39321 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60321 <nil> <nil>}
	I0629 11:53:02.537234   39321 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220629114717-24356 && echo "old-k8s-version-20220629114717-24356" | sudo tee /etc/hostname
	I0629 11:53:02.664885   39321 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220629114717-24356
	
	I0629 11:53:02.664959   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:02.738843   39321 main.go:134] libmachine: Using SSH client type: native
	I0629 11:53:02.739033   39321 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60321 <nil> <nil>}
	I0629 11:53:02.739051   39321 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220629114717-24356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220629114717-24356/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220629114717-24356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 11:53:02.858236   39321 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 11:53:02.858255   39321 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube}
	I0629 11:53:02.858272   39321 ubuntu.go:177] setting up certificates
	I0629 11:53:02.858281   39321 provision.go:83] configureAuth start
	I0629 11:53:02.858345   39321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220629114717-24356
	I0629 11:53:02.929876   39321 provision.go:138] copyHostCerts
	I0629 11:53:02.929998   39321 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem, removing ...
	I0629 11:53:02.930014   39321 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem
	I0629 11:53:02.930137   39321 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem (1082 bytes)
	I0629 11:53:02.930410   39321 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem, removing ...
	I0629 11:53:02.930419   39321 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem
	I0629 11:53:02.930485   39321 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem (1123 bytes)
	I0629 11:53:02.930681   39321 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem, removing ...
	I0629 11:53:02.930688   39321 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem
	I0629 11:53:02.930750   39321 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem (1675 bytes)
	I0629 11:53:02.930868   39321 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220629114717-24356 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220629114717-24356]
	I0629 11:53:03.099477   39321 provision.go:172] copyRemoteCerts
	I0629 11:53:03.099537   39321 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 11:53:03.099583   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:03.171561   39321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60321 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/old-k8s-version-20220629114717-24356/id_rsa Username:docker}
	I0629 11:53:03.259681   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0629 11:53:03.277353   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0629 11:53:03.294474   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0629 11:53:03.311679   39321 provision.go:86] duration metric: configureAuth took 453.364787ms
	I0629 11:53:03.311691   39321 ubuntu.go:193] setting minikube options for container-runtime
	I0629 11:53:03.311820   39321 config.go:178] Loaded profile config "old-k8s-version-20220629114717-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0629 11:53:03.311873   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:03.383560   39321 main.go:134] libmachine: Using SSH client type: native
	I0629 11:53:03.383791   39321 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60321 <nil> <nil>}
	I0629 11:53:03.383829   39321 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0629 11:53:03.505174   39321 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0629 11:53:03.505190   39321 ubuntu.go:71] root file system type: overlay
	I0629 11:53:03.505337   39321 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0629 11:53:03.505412   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:03.576780   39321 main.go:134] libmachine: Using SSH client type: native
	I0629 11:53:03.576940   39321 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60321 <nil> <nil>}
	I0629 11:53:03.576993   39321 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0629 11:53:03.702032   39321 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
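The drop-in written above relies on a systemd directive rule: `ExecStart=` lines accumulate, and an *empty* assignment clears everything accumulated so far. That is why the unit sets a bare `ExecStart=` before the real `dockerd` command, as the embedded comment explains. A minimal sketch of that accumulation rule (a hypothetical toy parser, not systemd itself):

```python
def accumulate_execstart(lines):
    """Mimic systemd's handling of repeated ExecStart= directives:
    each non-empty assignment appends a command, and an empty
    assignment resets the accumulated list."""
    cmds = []
    for line in lines:
        if line.startswith("ExecStart="):
            value = line[len("ExecStart="):].strip()
            if value:
                cmds.append(value)
            else:
                cmds = []  # empty assignment clears inherited commands
    return cmds

unit = [
    "ExecStart=/usr/bin/dockerd -H fd://",   # inherited from the base unit
    "ExecStart=",                            # the drop-in clears it
    "ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376",
]
print(accumulate_execstart(unit))  # -> ['/usr/bin/dockerd -H tcp://0.0.0.0:2376']
```

Without the clearing line, systemd would see two commands for a non-`oneshot` service and refuse to start it, exactly as the comment in the unit warns.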
	I0629 11:53:03.702109   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:03.773428   39321 main.go:134] libmachine: Using SSH client type: native
	I0629 11:53:03.773587   39321 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60321 <nil> <nil>}
	I0629 11:53:03.773602   39321 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0629 11:53:03.895380   39321 main.go:134] libmachine: SSH cmd err, output: <nil>: 
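The SSH command above is an install-if-changed pattern: `diff -u` exits 0 when the staged `.new` file matches the installed unit, so the `mv` + `daemon-reload` + `restart` branch only runs when the file actually changed. A rough stdlib sketch of the same idea (hypothetical helper and file names):

```python
import filecmp
import os
import shutil
import tempfile

def install_if_changed(staged_path, dest_path):
    """Replace dest_path with staged_path only when contents differ,
    mirroring `diff -u dest staged || { mv staged dest; restart; }`.
    Returns True when a replacement (and hence a restart) was needed."""
    if os.path.exists(dest_path) and filecmp.cmp(dest_path, staged_path, shallow=False):
        os.remove(staged_path)        # identical: discard the staged copy
        return False
    shutil.move(staged_path, dest_path)  # changed: swap the file in
    return True

# demo in a temp dir with hypothetical names
d = tempfile.mkdtemp()
dest = os.path.join(d, "docker.service")
staged = os.path.join(d, "docker.service.new")
open(dest, "w").write("old unit\n")
open(staged, "w").write("new unit\n")
print(install_if_changed(staged, dest))  # True: contents differed
print(open(dest).read())                 # new unit
```

Skipping the restart when nothing changed is what keeps re-provisioning an existing machine fast (here the whole provision step took ~1.4s).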
	I0629 11:53:03.895393   39321 machine.go:91] provisioned docker machine in 1.436757152s
	I0629 11:53:03.895403   39321 start.go:306] post-start starting for "old-k8s-version-20220629114717-24356" (driver="docker")
	I0629 11:53:03.895408   39321 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 11:53:03.895461   39321 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 11:53:03.895508   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:03.971006   39321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60321 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/old-k8s-version-20220629114717-24356/id_rsa Username:docker}
	I0629 11:53:04.056695   39321 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 11:53:04.060270   39321 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 11:53:04.060284   39321 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 11:53:04.060291   39321 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 11:53:04.060295   39321 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 11:53:04.060306   39321 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/addons for local assets ...
	I0629 11:53:04.060434   39321 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files for local assets ...
	I0629 11:53:04.060599   39321 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem -> 243562.pem in /etc/ssl/certs
	I0629 11:53:04.060774   39321 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 11:53:04.067711   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /etc/ssl/certs/243562.pem (1708 bytes)
	I0629 11:53:04.085232   39321 start.go:309] post-start completed in 189.815092ms
	I0629 11:53:04.085301   39321 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 11:53:04.085359   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:04.156347   39321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60321 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/old-k8s-version-20220629114717-24356/id_rsa Username:docker}
	I0629 11:53:04.238000   39321 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
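The two `df ... | awk 'NR==2{print $N}'` probes above pull a single field from the second line of `df` output (`$5` is the use percentage, `$4` with `-BG` is available gigabytes). The same extraction in Python, with made-up sample output:

```python
def df_field(df_output, column):
    """Pick a 1-based column from the second line of `df` output,
    like the awk 'NR==2{print $N}' one-liners in the log."""
    second_line = df_output.splitlines()[1]
    return second_line.split()[column - 1]

# hypothetical `df -BG /var` output for illustration
sample = (
    "Filesystem     1G-blocks  Used Available Use% Mounted on\n"
    "/dev/vda1            59G   14G       42G  26% /var\n"
)
print(df_field(sample, 5))  # -> 26%
print(df_field(sample, 4))  # -> 42G
```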
	I0629 11:53:04.242481   39321 fix.go:57] fixHost completed within 2.414782183s
	I0629 11:53:04.242492   39321 start.go:81] releasing machines lock for "old-k8s-version-20220629114717-24356", held for 2.414817597s
	I0629 11:53:04.242573   39321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220629114717-24356
	I0629 11:53:04.313552   39321 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 11:53:04.313558   39321 ssh_runner.go:195] Run: systemctl --version
	I0629 11:53:04.313633   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:04.313644   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:04.389089   39321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60321 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/old-k8s-version-20220629114717-24356/id_rsa Username:docker}
	I0629 11:53:04.391746   39321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60321 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/old-k8s-version-20220629114717-24356/id_rsa Username:docker}
	I0629 11:53:04.950787   39321 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0629 11:53:04.961037   39321 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0629 11:53:04.961098   39321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 11:53:04.972557   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 11:53:04.985220   39321 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0629 11:53:05.057913   39321 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0629 11:53:05.127457   39321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 11:53:05.201096   39321 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0629 11:53:05.403377   39321 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 11:53:05.442119   39321 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 11:53:05.520315   39321 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0629 11:53:05.520496   39321 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220629114717-24356 dig +short host.docker.internal
	I0629 11:53:05.646740   39321 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0629 11:53:05.646853   39321 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0629 11:53:05.651058   39321 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
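The `/etc/hosts` rewrite above is an idempotent upsert: `grep -v` drops any line already ending in a tab plus the hostname, then `echo` appends a fresh `<ip>\t<name>` entry, so repeated runs never duplicate the line. A sketch of the same transformation on a string (hypothetical helper name):

```python
def upsert_hosts_entry(hosts_text, ip, name):
    """Drop any line ending in "\t<name>" and append "<ip>\t<name>",
    matching the grep -v + echo pipeline from the log."""
    kept = [line for line in hosts_text.splitlines()
            if not line.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"

hosts = "127.0.0.1\tlocalhost\n192.168.65.1\thost.minikube.internal\n"
print(upsert_hosts_entry(hosts, "192.168.65.2", "host.minikube.internal"))
# -> 127.0.0.1\tlocalhost
#    192.168.65.2\thost.minikube.internal
```

The same pattern is used again further down for `control-plane.minikube.internal`.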
	I0629 11:53:05.662556   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:05.733785   39321 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0629 11:53:05.733877   39321 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 11:53:05.763532   39321 docker.go:602] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0629 11:53:05.763547   39321 docker.go:533] Images already preloaded, skipping extraction
	I0629 11:53:05.763613   39321 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 11:53:05.793235   39321 docker.go:602] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0629 11:53:05.793253   39321 cache_images.go:84] Images are preloaded, skipping loading
	I0629 11:53:05.793340   39321 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0629 11:53:05.867180   39321 cni.go:95] Creating CNI manager for ""
	I0629 11:53:05.867191   39321 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:53:05.867206   39321 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0629 11:53:05.867219   39321 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220629114717-24356 NodeName:old-k8s-version-20220629114717-24356 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 11:53:05.867334   39321 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220629114717-24356"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220629114717-24356
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
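The generated kubeadm config above is a multi-document YAML stream: `InitConfiguration`, `ClusterConfiguration`, `KubeletConfiguration`, and `KubeProxyConfiguration`, separated by `---` lines. A stdlib-only sketch of splitting such a stream (a real consumer would use `yaml.safe_load_all`; the shortened config here is illustrative):

```python
def split_yaml_docs(text):
    """Split a multi-document YAML stream on standalone '---' lines."""
    docs, current = [], []
    for line in text.splitlines():
        if line.strip() == "---":
            docs.append("\n".join(current))
            current = []
        else:
            current.append(line)
    docs.append("\n".join(current))
    return docs

config = """apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
"""
kinds = [d.splitlines()[1].split(": ")[1] for d in split_yaml_docs(config)]
print(kinds)  # -> ['InitConfiguration', 'ClusterConfiguration', 'KubeletConfiguration']
```

Note the `v1beta1` API versions: this is what pins the config to the old v1.16.0 Kubernetes under test (newer versions use `v1beta2`/`v1beta3`).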
	I0629 11:53:05.867405   39321 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220629114717-24356 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220629114717-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0629 11:53:05.867467   39321 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0629 11:53:05.874886   39321 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 11:53:05.874948   39321 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0629 11:53:05.881929   39321 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0629 11:53:05.894526   39321 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 11:53:05.906971   39321 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0629 11:53:05.919357   39321 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0629 11:53:05.923010   39321 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 11:53:05.934256   39321 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356 for IP: 192.168.76.2
	I0629 11:53:05.934374   39321 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key
	I0629 11:53:05.934432   39321 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key
	I0629 11:53:05.934518   39321 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/client.key
	I0629 11:53:05.934586   39321 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/apiserver.key.31bdca25
	I0629 11:53:05.934644   39321 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/proxy-client.key
	I0629 11:53:05.934860   39321 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem (1338 bytes)
	W0629 11:53:05.934902   39321 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356_empty.pem, impossibly tiny 0 bytes
	I0629 11:53:05.934916   39321 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem (1679 bytes)
	I0629 11:53:05.934951   39321 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem (1082 bytes)
	I0629 11:53:05.934990   39321 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem (1123 bytes)
	I0629 11:53:05.935032   39321 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem (1675 bytes)
	I0629 11:53:05.935095   39321 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem (1708 bytes)
	I0629 11:53:05.935616   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0629 11:53:05.952783   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0629 11:53:05.969962   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0629 11:53:05.986903   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0629 11:53:06.004120   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 11:53:04.931647   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:06.931781   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:06.022586   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0629 11:53:06.059761   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 11:53:06.076874   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0629 11:53:06.093750   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem --> /usr/share/ca-certificates/24356.pem (1338 bytes)
	I0629 11:53:06.110970   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /usr/share/ca-certificates/243562.pem (1708 bytes)
	I0629 11:53:06.128088   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 11:53:06.146358   39321 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0629 11:53:06.159473   39321 ssh_runner.go:195] Run: openssl version
	I0629 11:53:06.164773   39321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 11:53:06.172822   39321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:53:06.176828   39321 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 17:54 /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:53:06.176875   39321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:53:06.182239   39321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 11:53:06.189362   39321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24356.pem && ln -fs /usr/share/ca-certificates/24356.pem /etc/ssl/certs/24356.pem"
	I0629 11:53:06.197559   39321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24356.pem
	I0629 11:53:06.201505   39321 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 17:58 /usr/share/ca-certificates/24356.pem
	I0629 11:53:06.201555   39321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24356.pem
	I0629 11:53:06.207119   39321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24356.pem /etc/ssl/certs/51391683.0"
	I0629 11:53:06.214849   39321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/243562.pem && ln -fs /usr/share/ca-certificates/243562.pem /etc/ssl/certs/243562.pem"
	I0629 11:53:06.222597   39321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/243562.pem
	I0629 11:53:06.226582   39321 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 17:58 /usr/share/ca-certificates/243562.pem
	I0629 11:53:06.226621   39321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/243562.pem
	I0629 11:53:06.231864   39321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/243562.pem /etc/ssl/certs/3ec20f2e.0"
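The certificate setup above follows OpenSSL's hashed-directory convention: `openssl x509 -hash -noout -in <cert>` prints the subject-name hash, and a symlink named `<hash>.0` in `/etc/ssl/certs` (e.g. `b5213941.0` for `minikubeCA.pem`) lets OpenSSL find the CA by hash. The `test -L x || ln -fs target x` guard makes link creation idempotent. A sketch of just the guard (hypothetical helper, demoed in a temp dir):

```python
import os
import tempfile

def ensure_symlink(target, link_path):
    """Create link_path -> target unless a symlink is already there,
    mirroring `test -L link || ln -fs target link` from the log.
    Returns True when a new link was created."""
    if os.path.islink(link_path):
        return False              # already linked; leave it alone
    os.symlink(target, link_path)
    return True

d = tempfile.mkdtemp()
cert = os.path.join(d, "minikubeCA.pem")
open(cert, "w").write("dummy cert\n")
link = os.path.join(d, "b5213941.0")   # subject-hash filename, as in the log
print(ensure_symlink(cert, link))  # True: link created
print(ensure_symlink(cert, link))  # False: second run is a no-op
```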
	I0629 11:53:06.239364   39321 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220629114717-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220629114717-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:53:06.239478   39321 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 11:53:06.268678   39321 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0629 11:53:06.276184   39321 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0629 11:53:06.276201   39321 kubeadm.go:626] restartCluster start
	I0629 11:53:06.276249   39321 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0629 11:53:06.282969   39321 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:06.283027   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:06.354486   39321 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220629114717-24356" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 11:53:06.354648   39321 kubeconfig.go:127] "old-k8s-version-20220629114717-24356" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig - will repair!
	I0629 11:53:06.354967   39321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk20ebad566718388182fa7c9da1cb4ef6bd9ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:53:06.356063   39321 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0629 11:53:06.363888   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:06.363980   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:06.372296   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
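The block above is the first iteration of a fixed-interval polling loop: `api_server.go` re-runs the `pgrep` check roughly every 200ms until the apiserver process appears (the remaining iterations follow below). The shape of such a loop, sketched with a hypothetical check function:

```python
import time

def poll_until(check, interval=0.2, timeout=1.0):
    """Retry `check` at a fixed interval until it succeeds or the
    timeout expires. Returns the successful attempt number, or 0 on
    timeout -- the shape of the repeated apiserver status checks."""
    deadline = time.monotonic() + timeout
    attempts = 0
    while time.monotonic() < deadline:
        attempts += 1
        if check():
            return attempts
        time.sleep(interval)
    return 0

# hypothetical probe that starts succeeding on the third attempt
state = {"n": 0}
def fake_apiserver_up():
    state["n"] += 1
    return state["n"] >= 3

print(poll_until(fake_apiserver_up))  # -> 3
```

In the log the process never appears during this window because the cluster is being restarted, so each iteration logs the same `Process exited with status 1`.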
	I0629 11:53:06.572897   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:06.573039   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:06.583383   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:06.773156   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:06.773259   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:06.783501   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:06.972425   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:06.972514   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:06.981322   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:07.173227   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:07.173323   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:07.183915   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:07.373230   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:07.373327   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:07.383900   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:07.573955   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:07.574107   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:07.584389   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:07.774471   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:07.774706   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:07.784989   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:07.972462   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:07.972554   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:07.982777   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:08.172517   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:08.172614   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:08.183424   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:08.372918   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:08.373101   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:08.383561   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:08.572500   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:08.572573   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:08.582518   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:08.772633   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:08.772771   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:08.783206   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:08.972740   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:08.972875   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:08.983311   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:09.172733   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:09.172846   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:09.183530   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:09.372639   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:09.372862   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:09.383814   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:09.383824   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:09.383870   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:09.392053   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:09.392064   39321 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0629 11:53:09.392072   39321 kubeadm.go:1092] stopping kube-system containers ...
	I0629 11:53:09.392131   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 11:53:09.420212   39321 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0629 11:53:09.433676   39321 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 11:53:09.441303   39321 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5747 Jun 29 18:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5787 Jun 29 18:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5935 Jun 29 18:49 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5731 Jun 29 18:49 /etc/kubernetes/scheduler.conf
	
	I0629 11:53:09.441356   39321 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0629 11:53:09.448705   39321 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0629 11:53:09.455863   39321 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0629 11:53:09.463598   39321 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0629 11:53:09.470944   39321 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 11:53:09.479430   39321 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0629 11:53:09.479451   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:53:09.530261   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:53:10.632194   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.101882408s)
	I0629 11:53:10.632212   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:53:10.847331   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:53:10.904889   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:53:10.963035   39321 api_server.go:51] waiting for apiserver process to appear ...
	I0629 11:53:10.963098   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:08.931920   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:11.430843   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:11.471629   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:11.971653   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:12.471604   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:12.973656   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:13.471720   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:13.971792   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:14.473862   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:14.972657   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:15.472511   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:15.973033   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:13.432531   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:15.934415   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:16.472375   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:16.972679   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:17.471980   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:17.972744   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:18.472610   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:18.972373   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:19.471947   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:19.972438   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:20.472581   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:20.972723   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:18.432311   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:20.432454   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:21.473577   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:21.972016   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:22.472026   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:22.973315   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:23.471896   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:23.972447   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:24.471973   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:24.973386   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:25.473637   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:25.972648   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:22.932135   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:24.933190   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:27.432928   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:26.472198   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:26.972657   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:27.472346   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:27.972638   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:28.473151   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:28.972205   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:29.472234   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:29.972717   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:30.472697   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:30.972995   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:29.433003   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:31.433480   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:31.472433   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:31.972406   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:32.472190   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:32.974199   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:33.472460   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:33.972993   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:34.472909   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:34.972289   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:35.473152   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:35.972577   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:33.433642   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:35.932766   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:36.474436   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:36.973628   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:37.472308   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:37.973415   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:38.472767   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:38.974410   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:39.473141   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:39.972605   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:40.472482   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:40.972864   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:37.933277   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:40.432620   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:42.433936   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:41.472723   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:41.974616   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:42.472627   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:42.972675   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:43.472686   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:43.973714   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:44.473536   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:44.973783   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:45.472730   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:45.972999   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:44.434699   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:46.933086   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:46.473581   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:46.973015   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:47.472857   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:47.972929   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:48.474126   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:48.972902   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:49.472981   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:49.972804   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:50.473092   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:50.973396   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:49.434292   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:51.434828   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:51.473121   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:51.973014   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:52.473008   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:52.973431   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:53.472906   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:53.973182   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:54.473436   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:54.974299   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:55.473284   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:55.973150   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:53.932198   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:56.434724   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:56.474409   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:56.973527   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:57.472991   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:57.972998   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:58.473348   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:58.973142   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:59.473282   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:59.973927   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:00.473094   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:00.974069   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:58.935361   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:54:01.434028   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:54:01.474438   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:01.973191   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:02.473214   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:02.973108   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:03.475258   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:03.974208   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:04.473408   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:04.975325   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:05.473242   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:05.974115   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:03.933474   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:54:05.935169   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:54:06.474575   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:06.973453   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:07.473535   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:07.973316   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:08.473278   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:08.974032   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:09.473400   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:09.973400   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:10.473858   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:10.973493   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:11.005027   39321 logs.go:274] 0 containers: []
	W0629 11:54:11.005047   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:11.005174   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:08.434385   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:54:10.435932   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:54:11.034514   39321 logs.go:274] 0 containers: []
	W0629 11:54:11.044684   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:11.044771   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:11.074864   39321 logs.go:274] 0 containers: []
	W0629 11:54:11.074876   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:11.074948   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:11.107049   39321 logs.go:274] 0 containers: []
	W0629 11:54:11.107060   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:11.107125   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:11.136126   39321 logs.go:274] 0 containers: []
	W0629 11:54:11.136137   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:11.136202   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:11.166106   39321 logs.go:274] 0 containers: []
	W0629 11:54:11.166123   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:11.166197   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:11.195233   39321 logs.go:274] 0 containers: []
	W0629 11:54:11.195244   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:11.195311   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:11.224314   39321 logs.go:274] 0 containers: []
	W0629 11:54:11.224326   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:11.224333   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:11.224341   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:11.238284   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:11.238295   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:13.292784   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054415695s)
	I0629 11:54:13.292934   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:13.292941   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:13.333282   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:13.333295   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:13.345303   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:13.345316   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:13.397489   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:15.899245   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:15.973676   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:16.003497   39321 logs.go:274] 0 containers: []
	W0629 11:54:16.003509   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:16.003567   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:12.934751   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:54:15.435329   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:54:16.033526   39321 logs.go:274] 0 containers: []
	W0629 11:54:16.044819   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:16.044901   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:16.076936   39321 logs.go:274] 0 containers: []
	W0629 11:54:16.076948   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:16.077013   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:16.107083   39321 logs.go:274] 0 containers: []
	W0629 11:54:16.107095   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:16.107151   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:16.138323   39321 logs.go:274] 0 containers: []
	W0629 11:54:16.138335   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:16.138389   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:16.167336   39321 logs.go:274] 0 containers: []
	W0629 11:54:16.167348   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:16.167417   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:16.198137   39321 logs.go:274] 0 containers: []
	W0629 11:54:16.198149   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:16.198204   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:16.227979   39321 logs.go:274] 0 containers: []
	W0629 11:54:16.227992   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:16.227999   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:16.228012   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:16.267349   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:16.267364   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:16.279505   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:16.279520   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:16.331710   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:16.331728   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:16.331736   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:16.345394   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:16.345405   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:18.399883   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05440587s)
	I0629 11:54:20.900466   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:20.973806   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:21.004342   39321 logs.go:274] 0 containers: []
	W0629 11:54:21.004356   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:21.004415   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:17.934521   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:54:20.436650   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:54:21.034479   39321 logs.go:274] 0 containers: []
	W0629 11:54:21.045019   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:21.045125   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:21.075792   39321 logs.go:274] 0 containers: []
	W0629 11:54:21.075805   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:21.075876   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:21.113638   39321 logs.go:274] 0 containers: []
	W0629 11:54:21.113651   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:21.113708   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:21.143417   39321 logs.go:274] 0 containers: []
	W0629 11:54:21.143429   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:21.143492   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:21.172595   39321 logs.go:274] 0 containers: []
	W0629 11:54:21.172607   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:21.172672   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:21.201866   39321 logs.go:274] 0 containers: []
	W0629 11:54:21.201878   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:21.201937   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:21.230654   39321 logs.go:274] 0 containers: []
	W0629 11:54:21.230664   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:21.230671   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:21.230677   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:21.271551   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:21.271572   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:21.284291   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:21.284305   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:21.340570   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:21.340584   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:21.340593   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:21.354206   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:21.354218   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:23.410357   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056065961s)
	I0629 11:54:25.911253   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:25.974183   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:26.006527   39321 logs.go:274] 0 containers: []
	W0629 11:54:26.006539   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:26.006593   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:22.935934   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:54:25.434546   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:54:27.928494   39013 pod_ready.go:81] duration metric: took 4m0.013477475s waiting for pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace to be "Ready" ...
	E0629 11:54:27.928518   39013 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0629 11:54:27.928588   39013 pod_ready.go:38] duration metric: took 4m15.068434231s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 11:54:27.928632   39013 kubeadm.go:630] restartCluster took 4m25.017561497s
	W0629 11:54:27.928753   39013 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0629 11:54:27.928782   39013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0629 11:54:30.406051   39013 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.477179412s)
	I0629 11:54:30.406109   39013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 11:54:30.416106   39013 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 11:54:30.423937   39013 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 11:54:30.423981   39013 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 11:54:30.431422   39013 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 11:54:30.431447   39013 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 11:54:26.034855   39321 logs.go:274] 0 containers: []
	W0629 11:54:26.045013   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:26.045108   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:26.075260   39321 logs.go:274] 0 containers: []
	W0629 11:54:26.075272   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:26.075332   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:26.104633   39321 logs.go:274] 0 containers: []
	W0629 11:54:26.104645   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:26.104702   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:26.134389   39321 logs.go:274] 0 containers: []
	W0629 11:54:26.134402   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:26.134460   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:26.165666   39321 logs.go:274] 0 containers: []
	W0629 11:54:26.165678   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:26.165744   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:26.196944   39321 logs.go:274] 0 containers: []
	W0629 11:54:26.196959   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:26.197023   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:26.224887   39321 logs.go:274] 0 containers: []
	W0629 11:54:26.224902   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:26.224910   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:26.224917   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:26.264545   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:26.264559   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:26.275868   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:26.275882   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:26.329330   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:26.329346   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:26.329353   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:26.343299   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:26.343311   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:28.396021   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052636665s)
	I0629 11:54:30.896828   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:30.973978   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:31.008212   39321 logs.go:274] 0 containers: []
	W0629 11:54:31.008225   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:31.008285   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:30.710947   39013 out.go:204]   - Generating certificates and keys ...
	I0629 11:54:31.365688   39013 out.go:204]   - Booting up control plane ...
	I0629 11:54:31.041367   39321 logs.go:274] 0 containers: []
	W0629 11:54:31.045055   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:31.045123   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:31.077818   39321 logs.go:274] 0 containers: []
	W0629 11:54:31.077830   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:31.077893   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:31.108115   39321 logs.go:274] 0 containers: []
	W0629 11:54:31.108128   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:31.108192   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:31.138455   39321 logs.go:274] 0 containers: []
	W0629 11:54:31.138469   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:31.138532   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:31.169314   39321 logs.go:274] 0 containers: []
	W0629 11:54:31.169329   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:31.169389   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:31.199503   39321 logs.go:274] 0 containers: []
	W0629 11:54:31.199515   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:31.199584   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:31.230870   39321 logs.go:274] 0 containers: []
	W0629 11:54:31.230884   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:31.230893   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:31.230912   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:31.274860   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:31.274876   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:31.289572   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:31.289588   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:31.345087   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:31.345100   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:31.345106   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:31.362082   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:31.362095   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:33.419132   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056963483s)
	I0629 11:54:35.919752   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:35.976084   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:36.006737   39321 logs.go:274] 0 containers: []
	W0629 11:54:36.006750   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:36.006814   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:36.036631   39321 logs.go:274] 0 containers: []
	W0629 11:54:36.045922   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:36.045984   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:36.075280   39321 logs.go:274] 0 containers: []
	W0629 11:54:36.075293   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:36.075359   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:36.105709   39321 logs.go:274] 0 containers: []
	W0629 11:54:36.105720   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:36.105789   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:36.135433   39321 logs.go:274] 0 containers: []
	W0629 11:54:36.135445   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:36.135509   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:36.164044   39321 logs.go:274] 0 containers: []
	W0629 11:54:36.164057   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:36.164116   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:36.193256   39321 logs.go:274] 0 containers: []
	W0629 11:54:36.193269   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:36.193331   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:36.221611   39321 logs.go:274] 0 containers: []
	W0629 11:54:36.221623   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:36.221630   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:36.221636   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:36.261723   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:36.261740   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:36.273915   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:36.273934   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:36.332462   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:36.332479   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:36.332487   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:36.346115   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:36.346128   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:38.400565   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054363884s)
	I0629 11:54:40.901227   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:40.976044   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:41.005727   39321 logs.go:274] 0 containers: []
	W0629 11:54:41.005739   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:41.005796   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:38.418030   39013 out.go:204]   - Configuring RBAC rules ...
	I0629 11:54:38.793760   39013 cni.go:95] Creating CNI manager for ""
	I0629 11:54:38.793771   39013 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:54:38.793794   39013 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0629 11:54:38.793877   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=80ef72c6e06144133907f90b1b2924df52b551ed minikube.k8s.io/name=no-preload-20220629114832-24356 minikube.k8s.io/updated_at=2022_06_29T11_54_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:38.793879   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:38.963621   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:38.963622   39013 ops.go:34] apiserver oom_adj: -16
	I0629 11:54:39.516051   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:40.015548   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:40.516675   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:41.015806   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:41.515804   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:42.016197   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:41.036553   39321 logs.go:274] 0 containers: []
	W0629 11:54:41.045422   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:41.045478   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:41.075203   39321 logs.go:274] 0 containers: []
	W0629 11:54:41.075216   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:41.075276   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:41.108156   39321 logs.go:274] 0 containers: []
	W0629 11:54:41.108168   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:41.108227   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:41.137946   39321 logs.go:274] 0 containers: []
	W0629 11:54:41.137957   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:41.138020   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:41.167765   39321 logs.go:274] 0 containers: []
	W0629 11:54:41.167777   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:41.167846   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:41.197634   39321 logs.go:274] 0 containers: []
	W0629 11:54:41.197645   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:41.197700   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:41.226006   39321 logs.go:274] 0 containers: []
	W0629 11:54:41.226019   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:41.226025   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:41.226036   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:41.278933   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:41.278945   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:41.278952   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:41.292648   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:41.292661   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:43.349789   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057054339s)
	I0629 11:54:43.349901   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:43.349908   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:43.389415   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:43.389428   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:45.901944   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:45.976279   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:46.007239   39321 logs.go:274] 0 containers: []
	W0629 11:54:46.007251   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:46.007317   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:42.517669   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:43.015630   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:43.517707   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:44.015840   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:44.515768   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:45.016492   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:45.516201   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:46.016051   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:46.515723   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:47.017768   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:46.038729   39321 logs.go:274] 0 containers: []
	W0629 11:54:46.045289   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:46.045348   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:46.080579   39321 logs.go:274] 0 containers: []
	W0629 11:54:46.080656   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:46.080727   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:46.110618   39321 logs.go:274] 0 containers: []
	W0629 11:54:46.110630   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:46.110691   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:46.139982   39321 logs.go:274] 0 containers: []
	W0629 11:54:46.139994   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:46.140049   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:46.168606   39321 logs.go:274] 0 containers: []
	W0629 11:54:46.168620   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:46.168685   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:46.198162   39321 logs.go:274] 0 containers: []
	W0629 11:54:46.198175   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:46.198238   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:46.226969   39321 logs.go:274] 0 containers: []
	W0629 11:54:46.226980   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:46.226987   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:46.226995   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:48.280086   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053017479s)
	I0629 11:54:48.280198   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:48.280208   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:48.321498   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:48.321516   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:48.333730   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:48.333746   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:48.386942   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:48.386954   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:48.386963   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:50.902020   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:50.976006   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:51.016056   39321 logs.go:274] 0 containers: []
	W0629 11:54:51.016066   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:51.016114   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:47.516204   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:48.016295   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:48.515997   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:49.015736   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:49.517555   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:50.016719   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:50.516173   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:51.015839   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:51.516070   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:52.016151   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:52.076730   39013 kubeadm.go:1045] duration metric: took 13.282526806s to wait for elevateKubeSystemPrivileges.
	I0629 11:54:52.076746   39013 kubeadm.go:397] StartCluster complete in 4m49.203921961s
	I0629 11:54:52.076764   39013 settings.go:142] acquiring lock: {Name:mk8cd784535a926dd1b6955ad1b3a357865d16d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:54:52.076848   39013 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 11:54:52.077402   39013 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk20ebad566718388182fa7c9da1cb4ef6bd9ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:54:52.592513   39013 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220629114832-24356" rescaled to 1
	I0629 11:54:52.592549   39013 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 11:54:52.592571   39013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0629 11:54:52.592603   39013 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0629 11:54:52.592801   39013 config.go:178] Loaded profile config "no-preload-20220629114832-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 11:54:52.613503   39013 out.go:177] * Verifying Kubernetes components...
	I0629 11:54:52.613574   39013 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220629114832-24356"
	I0629 11:54:52.613575   39013 addons.go:65] Setting dashboard=true in profile "no-preload-20220629114832-24356"
	I0629 11:54:52.655332   39013 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220629114832-24356"
	W0629 11:54:52.655344   39013 addons.go:162] addon storage-provisioner should already be in state true
	I0629 11:54:52.655336   39013 addons.go:153] Setting addon dashboard=true in "no-preload-20220629114832-24356"
	I0629 11:54:52.655355   39013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W0629 11:54:52.655364   39013 addons.go:162] addon dashboard should already be in state true
	I0629 11:54:52.613587   39013 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220629114832-24356"
	I0629 11:54:52.655401   39013 host.go:66] Checking if "no-preload-20220629114832-24356" exists ...
	I0629 11:54:52.655406   39013 host.go:66] Checking if "no-preload-20220629114832-24356" exists ...
	I0629 11:54:52.613581   39013 addons.go:65] Setting metrics-server=true in profile "no-preload-20220629114832-24356"
	I0629 11:54:52.655414   39013 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220629114832-24356"
	I0629 11:54:52.655429   39013 addons.go:153] Setting addon metrics-server=true in "no-preload-20220629114832-24356"
	W0629 11:54:52.655437   39013 addons.go:162] addon metrics-server should already be in state true
	I0629 11:54:52.655470   39013 host.go:66] Checking if "no-preload-20220629114832-24356" exists ...
	I0629 11:54:52.655688   39013 cli_runner.go:164] Run: docker container inspect no-preload-20220629114832-24356 --format={{.State.Status}}
	I0629 11:54:52.656760   39013 cli_runner.go:164] Run: docker container inspect no-preload-20220629114832-24356 --format={{.State.Status}}
	I0629 11:54:52.656831   39013 cli_runner.go:164] Run: docker container inspect no-preload-20220629114832-24356 --format={{.State.Status}}
	I0629 11:54:52.658957   39013 cli_runner.go:164] Run: docker container inspect no-preload-20220629114832-24356 --format={{.State.Status}}
	I0629 11:54:52.772237   39013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0629 11:54:52.830222   39013 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0629 11:54:52.772251   39013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220629114832-24356
	I0629 11:54:52.788521   39013 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220629114832-24356"
	I0629 11:54:52.867959   39013 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0629 11:54:52.809280   39013 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0629 11:54:52.868011   39013 addons.go:162] addon default-storageclass should already be in state true
	I0629 11:54:52.915294   39013 host.go:66] Checking if "no-preload-20220629114832-24356" exists ...
	I0629 11:54:52.915374   39013 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0629 11:54:52.937447   39013 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 11:54:52.958108   39013 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0629 11:54:52.958107   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0629 11:54:52.958132   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0629 11:54:52.995162   39013 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0629 11:54:52.958546   39013 cli_runner.go:164] Run: docker container inspect no-preload-20220629114832-24356 --format={{.State.Status}}
	I0629 11:54:52.995176   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0629 11:54:52.995225   39013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220629114832-24356
	I0629 11:54:52.995235   39013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220629114832-24356
	I0629 11:54:52.995234   39013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220629114832-24356
	I0629 11:54:53.015922   39013 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220629114832-24356" to be "Ready" ...
	I0629 11:54:53.046806   39013 node_ready.go:49] node "no-preload-20220629114832-24356" has status "Ready":"True"
	I0629 11:54:53.046823   39013 node_ready.go:38] duration metric: took 30.874814ms waiting for node "no-preload-20220629114832-24356" to be "Ready" ...
	I0629 11:54:53.046832   39013 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 11:54:53.056385   39013 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-fcqdl" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:53.119675   39013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60184 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/no-preload-20220629114832-24356/id_rsa Username:docker}
	I0629 11:54:53.120197   39013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60184 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/no-preload-20220629114832-24356/id_rsa Username:docker}
	I0629 11:54:53.120684   39013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60184 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/no-preload-20220629114832-24356/id_rsa Username:docker}
	I0629 11:54:53.122204   39013 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0629 11:54:53.122217   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0629 11:54:53.122275   39013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220629114832-24356
	I0629 11:54:53.208303   39013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60184 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/no-preload-20220629114832-24356/id_rsa Username:docker}
	I0629 11:54:53.263548   39013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 11:54:53.270566   39013 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0629 11:54:53.270596   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0629 11:54:53.280968   39013 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0629 11:54:53.280980   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0629 11:54:53.361187   39013 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0629 11:54:53.361202   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0629 11:54:53.369439   39013 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0629 11:54:53.369453   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0629 11:54:53.446321   39013 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0629 11:54:53.446336   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0629 11:54:53.453020   39013 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0629 11:54:53.453040   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0629 11:54:53.467130   39013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0629 11:54:53.472216   39013 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0629 11:54:53.472227   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0629 11:54:53.479863   39013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0629 11:54:53.553558   39013 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0629 11:54:53.553575   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0629 11:54:53.589176   39013 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0629 11:54:53.589190   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0629 11:54:53.667441   39013 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0629 11:54:53.667461   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0629 11:54:53.685591   39013 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0629 11:54:53.685603   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0629 11:54:53.746807   39013 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0629 11:54:53.746822   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0629 11:54:53.760881   39013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0629 11:54:53.980512   39013 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.150241193s)
	I0629 11:54:53.980529   39013 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0629 11:54:54.250331   39013 addons.go:383] Verifying addon metrics-server=true in "no-preload-20220629114832-24356"
	I0629 11:54:54.547583   39013 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0629 11:54:51.048022   39321 logs.go:274] 0 containers: []
	W0629 11:54:51.048034   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:51.048093   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:51.081074   39321 logs.go:274] 0 containers: []
	W0629 11:54:51.081085   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:51.081143   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:51.112957   39321 logs.go:274] 0 containers: []
	W0629 11:54:51.112968   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:51.113030   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:51.145997   39321 logs.go:274] 0 containers: []
	W0629 11:54:51.146009   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:51.146068   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:51.176395   39321 logs.go:274] 0 containers: []
	W0629 11:54:51.176407   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:51.176469   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:51.208630   39321 logs.go:274] 0 containers: []
	W0629 11:54:51.208645   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:51.208708   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:51.239987   39321 logs.go:274] 0 containers: []
	W0629 11:54:51.240003   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:51.240012   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:51.240021   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:51.287920   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:51.287939   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:51.302964   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:51.302985   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:51.362169   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:51.362179   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:51.362186   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:51.376235   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:51.376248   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:53.427692   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051370993s)
	I0629 11:54:55.928476   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:55.976666   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:56.005708   39321 logs.go:274] 0 containers: []
	W0629 11:54:56.005720   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:56.005780   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:54.568638   39013 addons.go:414] enableAddons completed in 1.976002698s
	I0629 11:54:54.577787   39013 pod_ready.go:92] pod "coredns-6d4b75cb6d-fcqdl" in "kube-system" namespace has status "Ready":"True"
	I0629 11:54:54.577802   39013 pod_ready.go:81] duration metric: took 1.52135183s waiting for pod "coredns-6d4b75cb6d-fcqdl" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:54.577811   39013 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-mkj7b" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:56.088829   39013 pod_ready.go:92] pod "coredns-6d4b75cb6d-mkj7b" in "kube-system" namespace has status "Ready":"True"
	I0629 11:54:56.088843   39013 pod_ready.go:81] duration metric: took 1.510981571s waiting for pod "coredns-6d4b75cb6d-mkj7b" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:56.088850   39013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20220629114832-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:56.095348   39013 pod_ready.go:92] pod "etcd-no-preload-20220629114832-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 11:54:56.095358   39013 pod_ready.go:81] duration metric: took 6.502967ms waiting for pod "etcd-no-preload-20220629114832-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:56.095365   39013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20220629114832-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:56.101367   39013 pod_ready.go:92] pod "kube-apiserver-no-preload-20220629114832-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 11:54:56.101377   39013 pod_ready.go:81] duration metric: took 6.00742ms waiting for pod "kube-apiserver-no-preload-20220629114832-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:56.101384   39013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20220629114832-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:56.107696   39013 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220629114832-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 11:54:56.107705   39013 pod_ready.go:81] duration metric: took 6.316155ms waiting for pod "kube-controller-manager-no-preload-20220629114832-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:56.107711   39013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7cvpr" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:56.219241   39013 pod_ready.go:92] pod "kube-proxy-7cvpr" in "kube-system" namespace has status "Ready":"True"
	I0629 11:54:56.219251   39013 pod_ready.go:81] duration metric: took 111.532331ms waiting for pod "kube-proxy-7cvpr" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:56.219257   39013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20220629114832-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:56.620319   39013 pod_ready.go:92] pod "kube-scheduler-no-preload-20220629114832-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 11:54:56.620332   39013 pod_ready.go:81] duration metric: took 401.057657ms waiting for pod "kube-scheduler-no-preload-20220629114832-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:56.620339   39013 pod_ready.go:38] duration metric: took 3.573386669s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 11:54:56.620353   39013 api_server.go:51] waiting for apiserver process to appear ...
	I0629 11:54:56.620418   39013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:56.630540   39013 api_server.go:71] duration metric: took 4.037851613s to wait for apiserver process to appear ...
	I0629 11:54:56.630553   39013 api_server.go:87] waiting for apiserver healthz status ...
	I0629 11:54:56.630560   39013 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60183/healthz ...
	I0629 11:54:56.635607   39013 api_server.go:266] https://127.0.0.1:60183/healthz returned 200:
	ok
	I0629 11:54:56.636666   39013 api_server.go:140] control plane version: v1.24.2
	I0629 11:54:56.636674   39013 api_server.go:130] duration metric: took 6.116861ms to wait for apiserver health ...
	I0629 11:54:56.636678   39013 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 11:54:56.823080   39013 system_pods.go:59] 9 kube-system pods found
	I0629 11:54:56.823092   39013 system_pods.go:61] "coredns-6d4b75cb6d-fcqdl" [fbcd50cd-0663-4e51-b103-e520c8d33ce3] Running
	I0629 11:54:56.823096   39013 system_pods.go:61] "coredns-6d4b75cb6d-mkj7b" [cdff1c2d-7c51-46bb-bd66-28e55f071f74] Running
	I0629 11:54:56.823099   39013 system_pods.go:61] "etcd-no-preload-20220629114832-24356" [f20e1065-ccd6-4e6b-9f89-19a78c82d84c] Running
	I0629 11:54:56.823103   39013 system_pods.go:61] "kube-apiserver-no-preload-20220629114832-24356" [ecc08c98-b6c2-44b1-892f-6190e6bf0f52] Running
	I0629 11:54:56.823106   39013 system_pods.go:61] "kube-controller-manager-no-preload-20220629114832-24356" [9d831661-e795-486e-9acf-c95e6bfe23b9] Running
	I0629 11:54:56.823110   39013 system_pods.go:61] "kube-proxy-7cvpr" [470eaa9c-23cf-4ede-ab50-7ed59f41354a] Running
	I0629 11:54:56.823114   39013 system_pods.go:61] "kube-scheduler-no-preload-20220629114832-24356" [5909a6d8-7ca6-4042-9a76-dbd460c37ea9] Running
	I0629 11:54:56.823120   39013 system_pods.go:61] "metrics-server-5c6f97fb75-8l9bk" [2716023f-a52f-44c4-858b-ec6667a36b0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 11:54:56.823127   39013 system_pods.go:61] "storage-provisioner" [285cc482-2cd9-4283-bc5a-1ef2e61213f8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0629 11:54:56.823132   39013 system_pods.go:74] duration metric: took 186.444677ms to wait for pod list to return data ...
	I0629 11:54:56.823137   39013 default_sa.go:34] waiting for default service account to be created ...
	I0629 11:54:57.019766   39013 default_sa.go:45] found service account: "default"
	I0629 11:54:57.019779   39013 default_sa.go:55] duration metric: took 196.631815ms for default service account to be created ...
	I0629 11:54:57.019785   39013 system_pods.go:116] waiting for k8s-apps to be running ...
	I0629 11:54:57.222905   39013 system_pods.go:86] 9 kube-system pods found
	I0629 11:54:57.222918   39013 system_pods.go:89] "coredns-6d4b75cb6d-fcqdl" [fbcd50cd-0663-4e51-b103-e520c8d33ce3] Running
	I0629 11:54:57.222923   39013 system_pods.go:89] "coredns-6d4b75cb6d-mkj7b" [cdff1c2d-7c51-46bb-bd66-28e55f071f74] Running
	I0629 11:54:57.222927   39013 system_pods.go:89] "etcd-no-preload-20220629114832-24356" [f20e1065-ccd6-4e6b-9f89-19a78c82d84c] Running
	I0629 11:54:57.222930   39013 system_pods.go:89] "kube-apiserver-no-preload-20220629114832-24356" [ecc08c98-b6c2-44b1-892f-6190e6bf0f52] Running
	I0629 11:54:57.222934   39013 system_pods.go:89] "kube-controller-manager-no-preload-20220629114832-24356" [9d831661-e795-486e-9acf-c95e6bfe23b9] Running
	I0629 11:54:57.222939   39013 system_pods.go:89] "kube-proxy-7cvpr" [470eaa9c-23cf-4ede-ab50-7ed59f41354a] Running
	I0629 11:54:57.222942   39013 system_pods.go:89] "kube-scheduler-no-preload-20220629114832-24356" [5909a6d8-7ca6-4042-9a76-dbd460c37ea9] Running
	I0629 11:54:57.222948   39013 system_pods.go:89] "metrics-server-5c6f97fb75-8l9bk" [2716023f-a52f-44c4-858b-ec6667a36b0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 11:54:57.222955   39013 system_pods.go:89] "storage-provisioner" [285cc482-2cd9-4283-bc5a-1ef2e61213f8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0629 11:54:57.222960   39013 system_pods.go:126] duration metric: took 203.164956ms to wait for k8s-apps to be running ...
	I0629 11:54:57.222966   39013 system_svc.go:44] waiting for kubelet service to be running ....
	I0629 11:54:57.223017   39013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 11:54:57.232711   39013 system_svc.go:56] duration metric: took 9.738308ms WaitForService to wait for kubelet.
	I0629 11:54:57.232724   39013 kubeadm.go:572] duration metric: took 4.640018458s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0629 11:54:57.232738   39013 node_conditions.go:102] verifying NodePressure condition ...
	I0629 11:54:57.420496   39013 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0629 11:54:57.420509   39013 node_conditions.go:123] node cpu capacity is 6
	I0629 11:54:57.420517   39013 node_conditions.go:105] duration metric: took 187.769826ms to run NodePressure ...
	I0629 11:54:57.420539   39013 start.go:213] waiting for startup goroutines ...
	I0629 11:54:57.450003   39013 start.go:506] kubectl: 1.24.0, cluster: 1.24.2 (minor skew: 0)
	I0629 11:54:57.471079   39013 out.go:177] * Done! kubectl is now configured to use "no-preload-20220629114832-24356" cluster and "default" namespace by default
	I0629 11:54:56.034443   39321 logs.go:274] 0 containers: []
	W0629 11:54:56.049359   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:56.049422   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:56.078685   39321 logs.go:274] 0 containers: []
	W0629 11:54:56.078697   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:56.078752   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:56.119131   39321 logs.go:274] 0 containers: []
	W0629 11:54:56.119143   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:56.119202   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:56.147731   39321 logs.go:274] 0 containers: []
	W0629 11:54:56.147743   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:56.147801   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:56.176982   39321 logs.go:274] 0 containers: []
	W0629 11:54:56.176994   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:56.177049   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:56.205600   39321 logs.go:274] 0 containers: []
	W0629 11:54:56.205613   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:56.205667   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:56.234552   39321 logs.go:274] 0 containers: []
	W0629 11:54:56.234564   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:56.234570   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:56.234576   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:56.275806   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:56.275822   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:56.288255   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:56.288270   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:56.343278   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:56.343289   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:56.343296   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:56.357151   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:56.357163   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:58.409308   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052071728s)
	I0629 11:55:00.909863   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:00.975039   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:01.009426   39321 logs.go:274] 0 containers: []
	W0629 11:55:01.009439   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:01.009500   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:01.058626   39321 logs.go:274] 0 containers: []
	W0629 11:55:01.058638   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:01.058715   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:01.096270   39321 logs.go:274] 0 containers: []
	W0629 11:55:01.096285   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:01.096370   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:01.130375   39321 logs.go:274] 0 containers: []
	W0629 11:55:01.130388   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:01.130446   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:01.167367   39321 logs.go:274] 0 containers: []
	W0629 11:55:01.167379   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:01.167443   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:01.200318   39321 logs.go:274] 0 containers: []
	W0629 11:55:01.200330   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:01.200390   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:01.231557   39321 logs.go:274] 0 containers: []
	W0629 11:55:01.231570   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:01.231629   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:01.266142   39321 logs.go:274] 0 containers: []
	W0629 11:55:01.266179   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:01.266211   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:01.266225   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:03.348388   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.082087684s)
	I0629 11:55:03.348526   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:03.348534   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:03.393758   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:03.393788   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:03.412557   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:03.412576   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:03.479793   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:03.479808   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:03.479818   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:05.995421   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:06.477124   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:06.508598   39321 logs.go:274] 0 containers: []
	W0629 11:55:06.508609   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:06.508668   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:06.571634   39321 logs.go:274] 0 containers: []
	W0629 11:55:06.571648   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:06.571709   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:06.603733   39321 logs.go:274] 0 containers: []
	W0629 11:55:06.603750   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:06.603821   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:06.641504   39321 logs.go:274] 0 containers: []
	W0629 11:55:06.641540   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:06.641612   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:06.680642   39321 logs.go:274] 0 containers: []
	W0629 11:55:06.680654   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:06.680718   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:06.719154   39321 logs.go:274] 0 containers: []
	W0629 11:55:06.719166   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:06.719243   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:06.752660   39321 logs.go:274] 0 containers: []
	W0629 11:55:06.752672   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:06.752781   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:06.790338   39321 logs.go:274] 0 containers: []
	W0629 11:55:06.790350   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:06.790357   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:06.790364   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:06.839137   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:06.839156   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:06.855958   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:06.855978   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:06.924265   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:06.924279   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:06.924285   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:06.947627   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:06.947646   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:09.012320   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.064598664s)
	I0629 11:55:11.512790   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:11.975458   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:12.007895   39321 logs.go:274] 0 containers: []
	W0629 11:55:12.007907   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:12.007963   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:12.039685   39321 logs.go:274] 0 containers: []
	W0629 11:55:12.039696   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:12.039751   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:12.068287   39321 logs.go:274] 0 containers: []
	W0629 11:55:12.068306   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:12.068380   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:12.097250   39321 logs.go:274] 0 containers: []
	W0629 11:55:12.097262   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:12.097329   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:12.125908   39321 logs.go:274] 0 containers: []
	W0629 11:55:12.125920   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:12.125974   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:12.155445   39321 logs.go:274] 0 containers: []
	W0629 11:55:12.155457   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:12.155513   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:12.185314   39321 logs.go:274] 0 containers: []
	W0629 11:55:12.185326   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:12.185383   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:12.214629   39321 logs.go:274] 0 containers: []
	W0629 11:55:12.214639   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:12.214646   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:12.214653   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:12.271182   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:12.271194   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:12.271204   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:12.286914   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:12.286928   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:14.343425   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056423824s)
	I0629 11:55:14.343535   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:14.343543   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:14.383870   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:14.383883   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:16.897690   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:16.976654   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:17.012584   39321 logs.go:274] 0 containers: []
	W0629 11:55:17.012596   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:17.012657   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:17.044046   39321 logs.go:274] 0 containers: []
	W0629 11:55:17.044058   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:17.044124   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:17.074296   39321 logs.go:274] 0 containers: []
	W0629 11:55:17.074308   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:17.074365   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:17.115757   39321 logs.go:274] 0 containers: []
	W0629 11:55:17.115768   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:17.115824   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:17.145895   39321 logs.go:274] 0 containers: []
	W0629 11:55:17.145906   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:17.145962   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:17.175767   39321 logs.go:274] 0 containers: []
	W0629 11:55:17.175777   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:17.175843   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:17.205469   39321 logs.go:274] 0 containers: []
	W0629 11:55:17.205480   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:17.205540   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:17.234651   39321 logs.go:274] 0 containers: []
	W0629 11:55:17.234663   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:17.234670   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:17.234677   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:17.277938   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:17.277952   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:17.289697   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:17.289715   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:17.341609   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:17.341618   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:17.341625   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:17.355655   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:17.355667   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:19.408285   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052537682s)
	I0629 11:55:21.910724   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:21.975500   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:22.004837   39321 logs.go:274] 0 containers: []
	W0629 11:55:22.004854   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:22.004921   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:22.035732   39321 logs.go:274] 0 containers: []
	W0629 11:55:22.035743   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:22.035801   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:22.069625   39321 logs.go:274] 0 containers: []
	W0629 11:55:22.069636   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:22.069692   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:22.099818   39321 logs.go:274] 0 containers: []
	W0629 11:55:22.099832   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:22.099880   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:22.130176   39321 logs.go:274] 0 containers: []
	W0629 11:55:22.130188   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:22.130247   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:22.162002   39321 logs.go:274] 0 containers: []
	W0629 11:55:22.162019   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:22.162078   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:22.190365   39321 logs.go:274] 0 containers: []
	W0629 11:55:22.190379   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:22.190442   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:22.219748   39321 logs.go:274] 0 containers: []
	W0629 11:55:22.219761   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:22.219767   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:22.219777   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:22.273321   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:22.273337   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:22.273352   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:22.287787   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:22.287800   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:24.342535   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054658523s)
	I0629 11:55:24.342644   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:24.342651   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:24.382581   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:24.382593   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:26.895697   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:26.977747   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:27.008926   39321 logs.go:274] 0 containers: []
	W0629 11:55:27.008938   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:27.009000   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:27.038100   39321 logs.go:274] 0 containers: []
	W0629 11:55:27.038111   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:27.038168   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:27.067169   39321 logs.go:274] 0 containers: []
	W0629 11:55:27.067180   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:27.067236   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:27.095625   39321 logs.go:274] 0 containers: []
	W0629 11:55:27.095637   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:27.095694   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:27.125107   39321 logs.go:274] 0 containers: []
	W0629 11:55:27.125118   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:27.125175   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:27.154968   39321 logs.go:274] 0 containers: []
	W0629 11:55:27.154982   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:27.155040   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:27.183779   39321 logs.go:274] 0 containers: []
	W0629 11:55:27.183791   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:27.183850   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:27.212801   39321 logs.go:274] 0 containers: []
	W0629 11:55:27.212813   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:27.212820   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:27.212827   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:27.253498   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:27.253514   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:27.265985   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:27.266001   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:27.322114   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:27.322123   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:27.322130   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:27.335806   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:27.335821   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:29.392403   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056508883s)
	I0629 11:55:31.893240   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:31.977413   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:32.008956   39321 logs.go:274] 0 containers: []
	W0629 11:55:32.008971   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:32.009028   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:32.038201   39321 logs.go:274] 0 containers: []
	W0629 11:55:32.038212   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:32.038267   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:32.066990   39321 logs.go:274] 0 containers: []
	W0629 11:55:32.067002   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:32.067057   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:32.097577   39321 logs.go:274] 0 containers: []
	W0629 11:55:32.097593   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:32.097667   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:32.127554   39321 logs.go:274] 0 containers: []
	W0629 11:55:32.127567   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:32.127629   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:32.156429   39321 logs.go:274] 0 containers: []
	W0629 11:55:32.156443   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:32.156507   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:32.185611   39321 logs.go:274] 0 containers: []
	W0629 11:55:32.185623   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:32.185681   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:32.214323   39321 logs.go:274] 0 containers: []
	W0629 11:55:32.214335   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:32.214342   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:32.214348   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:32.267585   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:32.267595   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:32.267601   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:32.282076   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:32.282088   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:34.339416   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057253442s)
	I0629 11:55:34.339525   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:34.339531   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:34.379921   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:34.379933   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:36.894519   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:36.975922   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:37.010242   39321 logs.go:274] 0 containers: []
	W0629 11:55:37.010263   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:37.010330   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:37.040881   39321 logs.go:274] 0 containers: []
	W0629 11:55:37.040893   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:37.040949   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:37.070230   39321 logs.go:274] 0 containers: []
	W0629 11:55:37.070242   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:37.070308   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:37.101292   39321 logs.go:274] 0 containers: []
	W0629 11:55:37.101303   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:37.101353   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:37.131101   39321 logs.go:274] 0 containers: []
	W0629 11:55:37.131113   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:37.131173   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:37.159540   39321 logs.go:274] 0 containers: []
	W0629 11:55:37.159552   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:37.159610   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:37.189520   39321 logs.go:274] 0 containers: []
	W0629 11:55:37.189532   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:37.189588   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:37.219222   39321 logs.go:274] 0 containers: []
	W0629 11:55:37.219233   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:37.219241   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:37.219248   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:37.259017   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:37.259032   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:37.270684   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:37.270696   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:37.322386   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:37.322399   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:37.322407   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:37.335982   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:37.335995   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:39.390442   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054372053s)
	I0629 11:55:41.891223   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:41.978245   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:42.009313   39321 logs.go:274] 0 containers: []
	W0629 11:55:42.009326   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:42.009380   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:42.039076   39321 logs.go:274] 0 containers: []
	W0629 11:55:42.039089   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:42.039146   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:42.068464   39321 logs.go:274] 0 containers: []
	W0629 11:55:42.068478   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:42.068534   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:42.097800   39321 logs.go:274] 0 containers: []
	W0629 11:55:42.097811   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:42.097866   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:42.127026   39321 logs.go:274] 0 containers: []
	W0629 11:55:42.127038   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:42.127093   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:42.156370   39321 logs.go:274] 0 containers: []
	W0629 11:55:42.156382   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:42.156444   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:42.186834   39321 logs.go:274] 0 containers: []
	W0629 11:55:42.186846   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:42.186901   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:42.215822   39321 logs.go:274] 0 containers: []
	W0629 11:55:42.215835   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:42.215846   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:42.215855   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:42.230305   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:42.230319   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:44.285629   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055236751s)
	I0629 11:55:44.285764   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:44.285771   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:44.325646   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:44.325660   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:44.337146   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:44.337159   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:44.389786   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:46.891554   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:46.978341   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:47.009917   39321 logs.go:274] 0 containers: []
	W0629 11:55:47.009929   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:47.009985   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:47.038523   39321 logs.go:274] 0 containers: []
	W0629 11:55:47.038534   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:47.038588   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:47.067903   39321 logs.go:274] 0 containers: []
	W0629 11:55:47.067915   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:47.067970   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:47.098087   39321 logs.go:274] 0 containers: []
	W0629 11:55:47.098099   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:47.098155   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:47.127152   39321 logs.go:274] 0 containers: []
	W0629 11:55:47.127164   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:47.127220   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:47.157028   39321 logs.go:274] 0 containers: []
	W0629 11:55:47.157039   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:47.157096   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:47.186471   39321 logs.go:274] 0 containers: []
	W0629 11:55:47.186483   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:47.186541   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:47.215975   39321 logs.go:274] 0 containers: []
	W0629 11:55:47.215988   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:47.215997   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:47.216004   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:47.256256   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:47.256268   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:47.268708   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:47.268721   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:47.320566   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:47.320577   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:47.320583   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:47.334197   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:47.334209   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:49.391366   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057082304s)
	I0629 11:55:51.893853   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:51.976453   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:52.006330   39321 logs.go:274] 0 containers: []
	W0629 11:55:52.006344   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:52.006418   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:52.036416   39321 logs.go:274] 0 containers: []
	W0629 11:55:52.036428   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:52.036489   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:52.065995   39321 logs.go:274] 0 containers: []
	W0629 11:55:52.066007   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:52.066062   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:52.095567   39321 logs.go:274] 0 containers: []
	W0629 11:55:52.095579   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:52.095639   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:52.125457   39321 logs.go:274] 0 containers: []
	W0629 11:55:52.125470   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:52.125526   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:52.154476   39321 logs.go:274] 0 containers: []
	W0629 11:55:52.154488   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:52.154545   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:52.183063   39321 logs.go:274] 0 containers: []
	W0629 11:55:52.183074   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:52.183133   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:52.212690   39321 logs.go:274] 0 containers: []
	W0629 11:55:52.212702   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:52.212708   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:52.212715   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:52.253322   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:52.253336   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:52.264898   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:52.264911   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:52.317711   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:52.317722   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:52.317729   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:52.331473   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:52.331486   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:54.387012   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055452409s)
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-29 18:49:58 UTC, end at Wed 2022-06-29 18:56:00 UTC. --
	Jun 29 18:54:29 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:29.143834211Z" level=info msg="ignoring event" container=b38c1e528ff3a0a45a660887944fb223a61a5c91c2319cd9bd050209c0b5a5e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:54:29 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:29.290302019Z" level=info msg="ignoring event" container=e43f60b0adfe27e62ea0beb066bcd75dd614ee889826e78859e11351d50b3e29 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:54:29 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:29.375707691Z" level=info msg="ignoring event" container=4c35dcdc4471e9d9c4c5da72f67b4c8453af5e2b85b0e3d1c0bd9adfd6456606 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:54:29 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:29.445750224Z" level=info msg="ignoring event" container=3a1551305a5e20f0271de3001f26fc5792d6dcd406b35d650d27bbd088ba2965 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:54:29 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:29.511314510Z" level=info msg="ignoring event" container=907fee6b0ab951dc570b507cfb53082f088d6e46e8f53b523f51806bbe7b6662 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:54:29 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:29.583997951Z" level=info msg="ignoring event" container=8d483a26327fea368267b8e3556918ffdc27582da76ac2cf4e7c30cf84ea008c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:54:29 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:29.687870390Z" level=info msg="ignoring event" container=882f6ead5f8649814f45dde882d7bababe2e9ea489a1db1d6341be2af91e0441 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:54:29 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:29.754239281Z" level=info msg="ignoring event" container=c65e7645bda76e59b22a150b3d65f3c25c956781bf0c9b2228b7a35c00d48463 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:54:29 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:29.871531414Z" level=info msg="ignoring event" container=f66e80ddbf1fcf48d493edf40bd111a16353bb1369c28fcab3769150326cef4b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:54:29 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:29.938807358Z" level=info msg="ignoring event" container=61ea5e8a5dad91bee9c26f03b2a0dc70191635968e3a4632cf100a9946fbdb5c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:54:30 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:30.029641025Z" level=info msg="ignoring event" container=b9a37d3d69e4feebcc42ec5347326b86471dfa8e5ab53141df52f19e1f6fcc3b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:54:55 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:55.003702864Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 18:54:55 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:55.003726242Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 18:54:55 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:55.004991117Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 18:54:57 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:57.597544824Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 29 18:54:58 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:58.311396066Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 29 18:54:59 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:59.179377083Z" level=info msg="ignoring event" container=af49197e52b2c31302999c10d2c306b0dab30a799cb3dd46805f0a0f863d5902 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:54:59 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:59.224426852Z" level=info msg="ignoring event" container=923b92c51ad6061167c22b8038f54aa7e3f07db7f2bba7552158eae2d4a0672b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:55:03 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:55:03.628142970Z" level=info msg="ignoring event" container=a7f95f34f56ed3f6e168fe1beb439bd6ce13bec913fce2309d48a785860e2096 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:55:03 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:55:03.662088432Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jun 29 18:55:04 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:55:04.279252927Z" level=info msg="ignoring event" container=592a996e048c801c02c22a6c449b60a88f58dbfbdebe6df0acb83d9b78dc8aea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:55:09 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:55:09.602202019Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 18:55:09 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:55:09.602255388Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 18:55:09 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:55:09.644169103Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 18:55:19 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:55:19.882429312Z" level=info msg="ignoring event" container=45257cf5b348193f22418066d665fc1ac8158235b6195ef3672e83d44cfe947b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	45257cf5b3481       a90209bb39e3d                                                                                    41 seconds ago       Exited              dashboard-metrics-scraper   2                   e36bb117aeda0
	565d25698c926       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   51 seconds ago       Running             kubernetes-dashboard        0                   ebf59b2d38d52
	18a1e2c19d2b3       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   54bf5bf72c3cd
	11e93671bb6e7       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   227e9f2b6e470
	f2624e6409795       a634548d10b03                                                                                    About a minute ago   Running             kube-proxy                  0                   afa8fe6012e83
	dcbaad6c52814       34cdf99b1bb3b                                                                                    About a minute ago   Running             kube-controller-manager     0                   c5f2433985f1b
	b7d773db9f211       d3377ffb7177c                                                                                    About a minute ago   Running             kube-apiserver              0                   c38e44e207e54
	3d1d52e8fbacf       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   71b251c62fffb
	75538b8195286       5d725196c1f47                                                                                    About a minute ago   Running             kube-scheduler              0                   04f22e297be93
	
	* 
	* ==> coredns [11e93671bb6e] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220629114832-24356
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220629114832-24356
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=80ef72c6e06144133907f90b1b2924df52b551ed
	                    minikube.k8s.io/name=no-preload-20220629114832-24356
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_29T11_54_38_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Jun 2022 18:54:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220629114832-24356
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Jun 2022 18:55:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Jun 2022 18:55:58 +0000   Wed, 29 Jun 2022 18:54:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Jun 2022 18:55:58 +0000   Wed, 29 Jun 2022 18:54:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Jun 2022 18:55:58 +0000   Wed, 29 Jun 2022 18:54:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Jun 2022 18:55:58 +0000   Wed, 29 Jun 2022 18:54:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    no-preload-20220629114832-24356
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbe1e1cef6e940328962dca52b3c5731
	  System UUID:                27a72ab0-3369-43c8-aa5b-98e38866b3a6
	  Boot ID:                    fadc233d-8cf8-4f28-b4a1-fb218440cdcd
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-fcqdl                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     69s
	  kube-system                 etcd-no-preload-20220629114832-24356                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         85s
	  kube-system                 kube-apiserver-no-preload-20220629114832-24356             250m (4%)     0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-no-preload-20220629114832-24356    200m (3%)     0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-7cvpr                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-scheduler-no-preload-20220629114832-24356             100m (1%)     0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 metrics-server-5c6f97fb75-8l9bk                            100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         67s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-6dcpk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-qmktl                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 68s   kube-proxy       
	  Normal  Starting                 83s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  83s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  83s   kubelet          Node no-preload-20220629114832-24356 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s   kubelet          Node no-preload-20220629114832-24356 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s   kubelet          Node no-preload-20220629114832-24356 status is now: NodeHasSufficientPID
	  Normal  NodeReady                83s   kubelet          Node no-preload-20220629114832-24356 status is now: NodeReady
	  Normal  RegisteredNode           70s   node-controller  Node no-preload-20220629114832-24356 event: Registered Node no-preload-20220629114832-24356 in Controller
	  Normal  Starting                 3s    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s    kubelet          Node no-preload-20220629114832-24356 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet          Node no-preload-20220629114832-24356 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet          Node no-preload-20220629114832-24356 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [3d1d52e8fbac] <==
	* {"level":"info","ts":"2022-06-29T18:54:33.001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-06-29T18:54:33.001Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-06-29T18:54:33.002Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-29T18:54:33.002Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-06-29T18:54:33.002Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-06-29T18:54:33.002Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-29T18:54:33.002Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-29T18:54:33.091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-29T18:54:33.091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-29T18:54:33.091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-06-29T18:54:33.091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-06-29T18:54:33.091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-06-29T18:54:33.091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-06-29T18:54:33.092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-06-29T18:54:33.095Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:no-preload-20220629114832-24356 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-29T18:54:33.095Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T18:54:33.095Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T18:54:33.096Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-06-29T18:54:33.096Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-29T18:54:33.096Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T18:54:33.097Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T18:54:33.097Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T18:54:33.097Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T18:54:33.101Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-29T18:54:33.101Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  18:56:01 up  1:03,  0 users,  load average: 0.41, 0.87, 1.15
	Linux no-preload-20220629114832-24356 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [b7d773db9f21] <==
	* I0629 18:54:37.817862       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0629 18:54:38.635432       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0629 18:54:38.641184       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0629 18:54:38.648885       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0629 18:54:38.728308       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0629 18:54:51.909926       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0629 18:54:52.009067       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0629 18:54:52.469667       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0629 18:54:54.201490       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.102.20.88]
	I0629 18:54:54.473744       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.96.223.208]
	I0629 18:54:54.482470       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.107.20.235]
	W0629 18:54:55.000271       1 handler_proxy.go:102] no RequestInfo found in the context
	W0629 18:54:55.000367       1 handler_proxy.go:102] no RequestInfo found in the context
	E0629 18:54:55.000375       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0629 18:54:55.000387       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0629 18:54:55.000388       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0629 18:54:55.001482       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0629 18:55:57.666864       1 handler_proxy.go:102] no RequestInfo found in the context
	E0629 18:55:57.666911       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0629 18:55:57.666925       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0629 18:55:57.667583       1 handler_proxy.go:102] no RequestInfo found in the context
	E0629 18:55:57.667593       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0629 18:55:57.667933       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [dcbaad6c5281] <==
	* I0629 18:54:52.110866       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-mkj7b"
	I0629 18:54:52.112523       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0629 18:54:52.114483       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-fcqdl"
	I0629 18:54:52.136657       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-mkj7b"
	I0629 18:54:54.006469       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0629 18:54:54.073412       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-8l9bk"
	I0629 18:54:54.280630       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0629 18:54:54.288071       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 18:54:54.295685       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 18:54:54.297310       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	I0629 18:54:54.298023       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 18:54:54.302860       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0629 18:54:54.303022       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 18:54:54.303270       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 18:54:54.311315       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 18:54:54.311925       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0629 18:54:54.312547       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 18:54:54.312567       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 18:54:54.365261       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-qmktl"
	I0629 18:54:54.367348       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-6dcpk"
	W0629 18:55:00.150194       1 endpointslice_controller.go:302] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: EndpointSlice informer cache is out of date
	E0629 18:55:21.303238       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0629 18:55:21.715126       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0629 18:55:57.913215       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0629 18:55:57.918204       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [f2624e640979] <==
	* I0629 18:54:52.441561       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0629 18:54:52.441620       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0629 18:54:52.441660       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0629 18:54:52.466334       1 server_others.go:206] "Using iptables Proxier"
	I0629 18:54:52.466371       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0629 18:54:52.466379       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0629 18:54:52.466388       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0629 18:54:52.466432       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 18:54:52.466584       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 18:54:52.466836       1 server.go:661] "Version info" version="v1.24.2"
	I0629 18:54:52.466866       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 18:54:52.467910       1 config.go:317] "Starting service config controller"
	I0629 18:54:52.467941       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0629 18:54:52.468001       1 config.go:226] "Starting endpoint slice config controller"
	I0629 18:54:52.468024       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0629 18:54:52.468044       1 config.go:444] "Starting node config controller"
	I0629 18:54:52.468047       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0629 18:54:52.568469       1 shared_informer.go:262] Caches are synced for node config
	I0629 18:54:52.568497       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0629 18:54:52.568508       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [75538b819528] <==
	* W0629 18:54:35.763508       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0629 18:54:35.763760       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0629 18:54:35.763931       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0629 18:54:35.763969       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0629 18:54:35.764011       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0629 18:54:35.764024       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0629 18:54:35.764041       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0629 18:54:35.764119       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0629 18:54:35.764219       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0629 18:54:35.764231       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0629 18:54:35.764298       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0629 18:54:35.764329       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0629 18:54:35.764303       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0629 18:54:35.764340       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0629 18:54:35.764417       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0629 18:54:35.764479       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0629 18:54:36.619510       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0629 18:54:36.619723       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0629 18:54:36.709370       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0629 18:54:36.709554       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0629 18:54:36.723566       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0629 18:54:36.723603       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0629 18:54:36.727912       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0629 18:54:36.727957       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0629 18:54:37.024685       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-29 18:49:58 UTC, end at Wed 2022-06-29 18:56:01 UTC. --
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.348250    9789 topology_manager.go:200] "Topology Admit Handler"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.348434    9789 topology_manager.go:200] "Topology Admit Handler"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.348468    9789 topology_manager.go:200] "Topology Admit Handler"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.348491    9789 topology_manager.go:200] "Topology Admit Handler"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.348629    9789 topology_manager.go:200] "Topology Admit Handler"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407272    9789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/285cc482-2cd9-4283-bc5a-1ef2e61213f8-tmp\") pod \"storage-provisioner\" (UID: \"285cc482-2cd9-4283-bc5a-1ef2e61213f8\") " pod="kube-system/storage-provisioner"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407331    9789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbdj2\" (UniqueName: \"kubernetes.io/projected/686867af-2f46-499f-a6b3-5322753bab16-kube-api-access-zbdj2\") pod \"kubernetes-dashboard-5fd5574d9f-qmktl\" (UID: \"686867af-2f46-499f-a6b3-5322753bab16\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-qmktl"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407396    9789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2716023f-a52f-44c4-858b-ec6667a36b0c-tmp-dir\") pod \"metrics-server-5c6f97fb75-8l9bk\" (UID: \"2716023f-a52f-44c4-858b-ec6667a36b0c\") " pod="kube-system/metrics-server-5c6f97fb75-8l9bk"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407439    9789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddbmp\" (UniqueName: \"kubernetes.io/projected/2716023f-a52f-44c4-858b-ec6667a36b0c-kube-api-access-ddbmp\") pod \"metrics-server-5c6f97fb75-8l9bk\" (UID: \"2716023f-a52f-44c4-858b-ec6667a36b0c\") " pod="kube-system/metrics-server-5c6f97fb75-8l9bk"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407468    9789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbcd50cd-0663-4e51-b103-e520c8d33ce3-config-volume\") pod \"coredns-6d4b75cb6d-fcqdl\" (UID: \"fbcd50cd-0663-4e51-b103-e520c8d33ce3\") " pod="kube-system/coredns-6d4b75cb6d-fcqdl"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407488    9789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/686867af-2f46-499f-a6b3-5322753bab16-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-qmktl\" (UID: \"686867af-2f46-499f-a6b3-5322753bab16\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-qmktl"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407509    9789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1e81bb70-d310-485c-bf9e-ffa1f6584c1e-tmp-volume\") pod \"dashboard-metrics-scraper-dffd48c4c-6dcpk\" (UID: \"1e81bb70-d310-485c-bf9e-ffa1f6584c1e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-6dcpk"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407525    9789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpcd4\" (UniqueName: \"kubernetes.io/projected/1e81bb70-d310-485c-bf9e-ffa1f6584c1e-kube-api-access-rpcd4\") pod \"dashboard-metrics-scraper-dffd48c4c-6dcpk\" (UID: \"1e81bb70-d310-485c-bf9e-ffa1f6584c1e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-6dcpk"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407540    9789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/470eaa9c-23cf-4ede-ab50-7ed59f41354a-xtables-lock\") pod \"kube-proxy-7cvpr\" (UID: \"470eaa9c-23cf-4ede-ab50-7ed59f41354a\") " pod="kube-system/kube-proxy-7cvpr"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407556    9789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xvfw\" (UniqueName: \"kubernetes.io/projected/fbcd50cd-0663-4e51-b103-e520c8d33ce3-kube-api-access-2xvfw\") pod \"coredns-6d4b75cb6d-fcqdl\" (UID: \"fbcd50cd-0663-4e51-b103-e520c8d33ce3\") " pod="kube-system/coredns-6d4b75cb6d-fcqdl"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407585    9789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5sjn\" (UniqueName: \"kubernetes.io/projected/470eaa9c-23cf-4ede-ab50-7ed59f41354a-kube-api-access-f5sjn\") pod \"kube-proxy-7cvpr\" (UID: \"470eaa9c-23cf-4ede-ab50-7ed59f41354a\") " pod="kube-system/kube-proxy-7cvpr"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407613    9789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g42x5\" (UniqueName: \"kubernetes.io/projected/285cc482-2cd9-4283-bc5a-1ef2e61213f8-kube-api-access-g42x5\") pod \"storage-provisioner\" (UID: \"285cc482-2cd9-4283-bc5a-1ef2e61213f8\") " pod="kube-system/storage-provisioner"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407633    9789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/470eaa9c-23cf-4ede-ab50-7ed59f41354a-lib-modules\") pod \"kube-proxy-7cvpr\" (UID: \"470eaa9c-23cf-4ede-ab50-7ed59f41354a\") " pod="kube-system/kube-proxy-7cvpr"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407660    9789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/470eaa9c-23cf-4ede-ab50-7ed59f41354a-kube-proxy\") pod \"kube-proxy-7cvpr\" (UID: \"470eaa9c-23cf-4ede-ab50-7ed59f41354a\") " pod="kube-system/kube-proxy-7cvpr"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407672    9789 reconciler.go:157] "Reconciler: start to sync state"
	Jun 29 18:56:00 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:56:00.545074    9789 request.go:601] Waited for 1.13194371s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jun 29 18:56:00 no-preload-20220629114832-24356 kubelet[9789]: E0629 18:56:00.572923    9789 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-no-preload-20220629114832-24356\" already exists" pod="kube-system/kube-apiserver-no-preload-20220629114832-24356"
	Jun 29 18:56:00 no-preload-20220629114832-24356 kubelet[9789]: E0629 18:56:00.801974    9789 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-no-preload-20220629114832-24356\" already exists" pod="kube-system/etcd-no-preload-20220629114832-24356"
	Jun 29 18:56:00 no-preload-20220629114832-24356 kubelet[9789]: E0629 18:56:00.948997    9789 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-no-preload-20220629114832-24356\" already exists" pod="kube-system/kube-controller-manager-no-preload-20220629114832-24356"
	Jun 29 18:56:01 no-preload-20220629114832-24356 kubelet[9789]: E0629 18:56:01.216905    9789 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-no-preload-20220629114832-24356\" already exists" pod="kube-system/kube-scheduler-no-preload-20220629114832-24356"
	
	* 
	* ==> kubernetes-dashboard [565d25698c92] <==
	* 2022/06/29 18:55:09 Starting overwatch
	2022/06/29 18:55:09 Using namespace: kubernetes-dashboard
	2022/06/29 18:55:09 Using in-cluster config to connect to apiserver
	2022/06/29 18:55:09 Using secret token for csrf signing
	2022/06/29 18:55:09 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/29 18:55:09 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/29 18:55:09 Successful initial request to the apiserver, version: v1.24.2
	2022/06/29 18:55:09 Generating JWE encryption key
	2022/06/29 18:55:09 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/29 18:55:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/29 18:55:10 Initializing JWE encryption key from synchronized object
	2022/06/29 18:55:10 Creating in-cluster Sidecar client
	2022/06/29 18:55:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/29 18:55:10 Serving insecurely on HTTP port: 9090
	2022/06/29 18:55:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [18a1e2c19d2b] <==
	* I0629 18:54:55.335760       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0629 18:54:55.343887       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0629 18:54:55.343956       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0629 18:54:55.350009       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0629 18:54:55.350109       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1b1c298e-b0d1-4b66-82b3-900d6c3a836c", APIVersion:"v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20220629114832-24356_7eeea5c0-179a-45b4-bb79-0ab563f6601a became leader
	I0629 18:54:55.350229       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20220629114832-24356_7eeea5c0-179a-45b4-bb79-0ab563f6601a!
	I0629 18:54:55.451647       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20220629114832-24356_7eeea5c0-179a-45b4-bb79-0ab563f6601a!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220629114832-24356 -n no-preload-20220629114832-24356
helpers_test.go:254: (dbg) Done: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220629114832-24356 -n no-preload-20220629114832-24356: (1.170842418s)
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220629114832-24356 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-8l9bk
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220629114832-24356 describe pod metrics-server-5c6f97fb75-8l9bk
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220629114832-24356 describe pod metrics-server-5c6f97fb75-8l9bk: exit status 1 (293.294716ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-8l9bk" not found

** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220629114832-24356 describe pod metrics-server-5c6f97fb75-8l9bk: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220629114832-24356
helpers_test.go:235: (dbg) docker inspect no-preload-20220629114832-24356:

-- stdout --
	[
	    {
	        "Id": "24a08bf9f03fd8afc3d791762e795669118d5cb1d0d978266cfbf80c55d86fab",
	        "Created": "2022-06-29T18:48:34.666212575Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 238271,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T18:49:58.692676896Z",
	            "FinishedAt": "2022-06-29T18:49:56.792943722Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/24a08bf9f03fd8afc3d791762e795669118d5cb1d0d978266cfbf80c55d86fab/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24a08bf9f03fd8afc3d791762e795669118d5cb1d0d978266cfbf80c55d86fab/hostname",
	        "HostsPath": "/var/lib/docker/containers/24a08bf9f03fd8afc3d791762e795669118d5cb1d0d978266cfbf80c55d86fab/hosts",
	        "LogPath": "/var/lib/docker/containers/24a08bf9f03fd8afc3d791762e795669118d5cb1d0d978266cfbf80c55d86fab/24a08bf9f03fd8afc3d791762e795669118d5cb1d0d978266cfbf80c55d86fab-json.log",
	        "Name": "/no-preload-20220629114832-24356",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220629114832-24356:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220629114832-24356",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e9e9aedbf3bec43acee919ebc9f8512bf6b25bacbd1ae4f19ce517451157914c-init/diff:/var/lib/docker/overlay2/fffebe0fdfada5807aeb835ff23043496ab70477725ee4f168b630301ac03e45/diff:/var/lib/docker/overlay2/d4eb6d2f34aa8e5c143d900dccdec5da9e3d130567442e6745d4efac5202fe49/diff:/var/lib/docker/overlay2/eb35fadba12ed9c48500d69b77e98e7dd72e90d3de5197d58b370df5b5dca4c7/diff:/var/lib/docker/overlay2/7b63894f671ef1edaa7c3b80a2acbde52dcdb21970e320799b6884e79553ea3e/diff:/var/lib/docker/overlay2/3740b6bc6ff226137eb09a6350d4395dc04bd9012c6c66125dc2ea6b663082cd/diff:/var/lib/docker/overlay2/a2fda66ed4937725e85838baed61cac418abe2ba55b4e664bf944246efcdd371/diff:/var/lib/docker/overlay2/574408913c5c73ee699b85768bbb4c0ce70e697bf6eb623e32017c62e8413acd/diff:/var/lib/docker/overlay2/1cde03c3877bfb18ad0533f814863e3030abec268ff30faceab8815ea7e2daf2/diff:/var/lib/docker/overlay2/52bf889e64b2ea0160f303622d5febb9c52b864e5a6dc2bfa5db90933ccaaa29/diff:/var/lib/docker/overlay2/b131e6ae4a7a7f5705d087e4001676276e4daa26d6acfc99799bb4992e322410/diff:/var/lib/docker/overlay2/3f5c774f6f46936a974bfc6530b012fda75a59b22450e3342486fe400ab4b531/diff:/var/lib/docker/overlay2/8462528084f0c44a79e421427e0e4bc9ddd7642428c47ff1899d41b265223245/diff:/var/lib/docker/overlay2/cb9765866d13ba37669ec242ea0a1af87c92c7291c716e52037a2ccadc64ac82/diff:/var/lib/docker/overlay2/f0d06e6fa53f3ca9622f1efcfac6fe3fd18d2e5b9e07be3d624b0b9987073e55/diff:/var/lib/docker/overlay2/4ebd12d8b25cff2d3d8a989c047b696088121f0964cc7f94c6d0178ef16e3e1f/diff:/var/lib/docker/overlay2/40e16f5720fd3a8c1c8792aea0ec143af819f19cad845dde40b57ed7e372ab73/diff:/var/lib/docker/overlay2/3ce5ee64ba683c997a13b7ffa65978b4c9652772729737facd794209d49251c3/diff:/var/lib/docker/overlay2/c55c549a78d490ea576942661ba65103ea2992693548217973bb8fa1a5948b74/diff:/var/lib/docker/overlay2/4651b16dbc2e22b8a43dc1154546514f2076168d12f9c108f85fe7c6e60325f0/diff:/var/lib/docker/overlay2/9576343ea03501b15b520a83ffdc675c6d9ecd501f6ffcf6564dd75aa4f2812a/diff:/var/lib/docker/overlay2/635ba7d01f96fd1ec1acabf157f4e5c00cbf80adf65b7f8873e444745fef2c9b/diff:/var/lib/docker/overlay2/6bbe0ce6ca00a7eb5bd7c22def5fcab4ebecab4a0b4cbc5ed236429671a41b6c/diff:/var/lib/docker/overlay2/b335551ba0fcfd6bff6ef5627289041f3083dc338e67b4f4728d4937bb6fb33a/diff:/var/lib/docker/overlay2/58cd90f6ad9016f3c4befb63eac504c9d2f0fc66251c5c9e3348080785d3cec4/diff:/var/lib/docker/overlay2/b7d943a8463e032d405d531846436b89574f10efeea6e4f2df92e3bb0e169d8e/diff:/var/lib/docker/overlay2/e633899f71c18e322af1b75837392bc89fd4275534b5bc70037965b0b80a770d/diff:/var/lib/docker/overlay2/651aabda39b5851bd186e23bc84f1029d819ed8eb032b13ac12f50f3d1486bfb/diff:/var/lib/docker/overlay2/3b137e27694d242a419b3fd2f8605837edfe77dae9462c63c3d7b41538e82591/diff:/var/lib/docker/overlay2/e9d4369b871c47acb146b73f8cbe14b89b0f74027df9117a7dc73f5dee8fee1c/diff:/var/lib/docker/overlay2/9379269362a969b07cc7d7f9faff9fa3b745529df38758733014a5dbe2470775/diff:/var/lib/docker/overlay2/9231c154723fa536d9894f703ec0388448e8611d5a01d54bca3a5b0a0b17ffd2/diff:/var/lib/docker/overlay2/9610e37ded5c6da7bd2c8edc56c3ae864637bb354f8ea3d6d1ccee6bd5c2aa7f/diff:/var/lib/docker/overlay2/025ecca5e756b1b8177204df7b2f2567a76dda456b2f1a8e312efd63150a8943/diff:/var/lib/docker/overlay2/7e69089e438e096c36ea0a4a37280fd036841e3287e57635e3407eb58fc0b6da/diff:/var/lib/docker/overlay2/c6d9ef67ed33e64c8ac8c4cdc7c33eb68f5266987969676165cabc2cf2fd346b/diff:/var/lib/docker/overlay2/394627c68237f7993b91eb0c377001630bb2e709dd58f65d899d44a3586dae91/diff:/var/lib/docker/overlay2/0c0c3c94789fc85cd70d9ee2b56d67ce6471d4dced47f21f15152d4edb6bc3e5/diff:/var/lib/docker/overlay2/849809e48c9bcbfe092aa063fcd274f284eeacde89acbb602b439d4cf0aef9b6/diff:/var/lib/docker/overlay2/49c27f0a55f204b161aa2da33ba8004f46cb93bf673975ad1b6286ce659db632/diff:/var/lib/docker/overlay2/a712a8f5cdb2f3840c706296240407405826d2936df034393c1ddf3cf2480b5f/diff:/var/lib/docker/overlay2/47949bfd134ff7a50def5e9b3af3424faf216354d1f157552f3c63c67c2728ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e9e9aedbf3bec43acee919ebc9f8512bf6b25bacbd1ae4f19ce517451157914c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e9e9aedbf3bec43acee919ebc9f8512bf6b25bacbd1ae4f19ce517451157914c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e9e9aedbf3bec43acee919ebc9f8512bf6b25bacbd1ae4f19ce517451157914c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220629114832-24356",
	                "Source": "/var/lib/docker/volumes/no-preload-20220629114832-24356/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220629114832-24356",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220629114832-24356",
	                "name.minikube.sigs.k8s.io": "no-preload-20220629114832-24356",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cf5fd47197df49ad1e61e112021a02331bbbb2328e17ef80b5702122456d7d14",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60184"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60185"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60186"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60187"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60183"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cf5fd47197df",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220629114832-24356": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "24a08bf9f03f",
	                        "no-preload-20220629114832-24356"
	                    ],
	                    "NetworkID": "280f12b17d38629a814fb7e64f456c21f5f6c8f0999ecd49f03be81ee0dfd3ee",
	                    "EndpointID": "c28bcd59329738d9d282cd041acbc33e3012203d89288366b496af7623c901f5",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220629114832-24356 -n no-preload-20220629114832-24356
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-20220629114832-24356 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p no-preload-20220629114832-24356 logs -n 25: (2.759210335s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-20220629112951-24356                    | minikube | jenkins | v1.26.0 | 29 Jun 22 11:45 PDT | 29 Jun 22 11:45 PDT |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| delete  | -p false-20220629112951-24356                     | minikube | jenkins | v1.26.0 | 29 Jun 22 11:45 PDT | 29 Jun 22 11:45 PDT |
	| start   | -p bridge-20220629112950-24356                    | minikube | jenkins | v1.26.0 | 29 Jun 22 11:45 PDT | 29 Jun 22 11:46 PDT |
	|         | --memory=2048                                     |          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |          |         |         |                     |                     |
	|         | --wait-timeout=5m --cni=bridge                    |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	| delete  | -p calico-20220629112951-24356                    | minikube | jenkins | v1.26.0 | 29 Jun 22 11:45 PDT | 29 Jun 22 11:45 PDT |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:45 PDT | 29 Jun 22 11:46 PDT |
	|         | enable-default-cni-20220629112950-24356           |          |         |         |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |          |         |         |                     |                     |
	|         | --enable-default-cni=true                         |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	| ssh     | -p bridge-20220629112950-24356                    | minikube | jenkins | v1.26.0 | 29 Jun 22 11:46 PDT | 29 Jun 22 11:46 PDT |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| delete  | -p bridge-20220629112950-24356                    | minikube | jenkins | v1.26.0 | 29 Jun 22 11:46 PDT | 29 Jun 22 11:46 PDT |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:46 PDT | 29 Jun 22 11:47 PDT |
	|         | kubenet-20220629112950-24356                      |          |         |         |                     |                     |
	|         | --memory=2048                                     |          |         |         |                     |                     |
	|         | --alsologtostderr                                 |          |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |          |         |         |                     |                     |
	|         | --network-plugin=kubenet                          |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:46 PDT | 29 Jun 22 11:46 PDT |
	|         | enable-default-cni-20220629112950-24356           |          |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:47 PDT | 29 Jun 22 11:47 PDT |
	|         | enable-default-cni-20220629112950-24356           |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:47 PDT | 29 Jun 22 11:47 PDT |
	|         | kubenet-20220629112950-24356                      |          |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:47 PDT |                     |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |          |         |         |                     |                     |
	|         | --disable-driver-mounts                           |          |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |          |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:48 PDT | 29 Jun 22 11:48 PDT |
	|         | kubenet-20220629112950-24356                      |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:48 PDT | 29 Jun 22 11:49 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --preload=false                       |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 29 Jun 22 11:49 PDT | 29 Jun 22 11:49 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:49 PDT | 29 Jun 22 11:49 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 29 Jun 22 11:49 PDT | 29 Jun 22 11:49 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:49 PDT | 29 Jun 22 11:54 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --preload=false                       |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 29 Jun 22 11:51 PDT |                     |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:52 PDT | 29 Jun 22 11:53 PDT |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 29 Jun 22 11:53 PDT | 29 Jun 22 11:53 PDT |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:53 PDT |                     |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |          |         |         |                     |                     |
	|         | --disable-driver-mounts                           |          |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |          |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:55 PDT | 29 Jun 22 11:55 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | sudo crictl images -o json                        |          |         |         |                     |                     |
	| pause   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:55 PDT | 29 Jun 22 11:55 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	| unpause | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:55 PDT | 29 Jun 22 11:55 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/29 11:53:01
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 11:53:01.020541   39321 out.go:296] Setting OutFile to fd 1 ...
	I0629 11:53:01.020674   39321 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:53:01.020678   39321 out.go:309] Setting ErrFile to fd 2...
	I0629 11:53:01.020682   39321 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:53:01.021047   39321 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 11:53:01.021305   39321 out.go:303] Setting JSON to false
	I0629 11:53:01.036590   39321 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":10349,"bootTime":1656518432,"procs":373,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0629 11:53:01.036679   39321 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 11:53:01.057889   39321 out.go:177] * [old-k8s-version-20220629114717-24356] minikube v1.26.0 on Darwin 12.4
	I0629 11:53:01.100418   39321 notify.go:193] Checking for updates...
	I0629 11:53:01.121817   39321 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 11:53:01.142983   39321 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 11:53:01.164005   39321 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0629 11:53:01.185015   39321 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 11:53:01.206165   39321 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 11:53:01.228648   39321 config.go:178] Loaded profile config "old-k8s-version-20220629114717-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0629 11:53:01.251012   39321 out.go:177] * Kubernetes 1.24.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.24.2
	I0629 11:53:01.271945   39321 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 11:53:01.341174   39321 docker.go:137] docker version: linux-20.10.16
	I0629 11:53:01.341305   39321 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:53:01.464360   39321 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 18:53:01.403963306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:53:01.486719   39321 out.go:177] * Using the docker driver based on existing profile
	I0629 11:53:01.529615   39321 start.go:284] selected driver: docker
	I0629 11:53:01.529644   39321 start.go:808] validating driver "docker" against &{Name:old-k8s-version-20220629114717-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220629114717-24356 N
amespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: M
ultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:53:01.529795   39321 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 11:53:01.533103   39321 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:53:01.655473   39321 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 18:53:01.595697353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:53:01.655650   39321 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0629 11:53:01.655668   39321 cni.go:95] Creating CNI manager for ""
	I0629 11:53:01.655678   39321 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:53:01.655687   39321 start_flags.go:310] config:
	{Name:old-k8s-version-20220629114717-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220629114717-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSD
omain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false Mount
String:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:53:01.677730   39321 out.go:177] * Starting control plane node old-k8s-version-20220629114717-24356 in cluster old-k8s-version-20220629114717-24356
	I0629 11:53:01.699300   39321 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 11:53:01.720322   39321 out.go:177] * Pulling base image ...
	I0629 11:53:01.762354   39321 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0629 11:53:01.762361   39321 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 11:53:01.762438   39321 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0629 11:53:01.762454   39321 cache.go:57] Caching tarball of preloaded images
	I0629 11:53:01.762660   39321 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 11:53:01.762692   39321 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0629 11:53:01.763793   39321 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/config.json ...
	I0629 11:53:01.827401   39321 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 11:53:01.827423   39321 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 11:53:01.827436   39321 cache.go:208] Successfully downloaded all kic artifacts
	I0629 11:53:01.827507   39321 start.go:352] acquiring machines lock for old-k8s-version-20220629114717-24356: {Name:mkeaf278b11a6771761242ef819919656a0edee3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 11:53:01.827595   39321 start.go:356] acquired machines lock for "old-k8s-version-20220629114717-24356" in 67.458µs
	I0629 11:53:01.827616   39321 start.go:94] Skipping create...Using existing machine configuration
	I0629 11:53:01.827625   39321 fix.go:55] fixHost starting: 
	I0629 11:53:01.827860   39321 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220629114717-24356 --format={{.State.Status}}
	I0629 11:53:01.894263   39321 fix.go:103] recreateIfNeeded on old-k8s-version-20220629114717-24356: state=Stopped err=<nil>
	W0629 11:53:01.894295   39321 fix.go:129] unexpected machine state, will restart: <nil>
	I0629 11:53:01.937823   39321 out.go:177] * Restarting existing docker container for "old-k8s-version-20220629114717-24356" ...
	I0629 11:52:57.932423   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:52:59.933063   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:02.433957   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:01.958803   39321 cli_runner.go:164] Run: docker start old-k8s-version-20220629114717-24356
	I0629 11:53:02.302625   39321 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220629114717-24356 --format={{.State.Status}}
	I0629 11:53:02.379116   39321 kic.go:416] container "old-k8s-version-20220629114717-24356" state is running.
	I0629 11:53:02.379733   39321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220629114717-24356
	I0629 11:53:02.458199   39321 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/config.json ...
	I0629 11:53:02.458585   39321 machine.go:88] provisioning docker machine ...
	I0629 11:53:02.458625   39321 ubuntu.go:169] provisioning hostname "old-k8s-version-20220629114717-24356"
	I0629 11:53:02.458691   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:02.536976   39321 main.go:134] libmachine: Using SSH client type: native
	I0629 11:53:02.537219   39321 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60321 <nil> <nil>}
	I0629 11:53:02.537234   39321 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220629114717-24356 && echo "old-k8s-version-20220629114717-24356" | sudo tee /etc/hostname
	I0629 11:53:02.664885   39321 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220629114717-24356
	
	I0629 11:53:02.664959   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:02.738843   39321 main.go:134] libmachine: Using SSH client type: native
	I0629 11:53:02.739033   39321 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60321 <nil> <nil>}
	I0629 11:53:02.739051   39321 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220629114717-24356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220629114717-24356/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220629114717-24356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 11:53:02.858236   39321 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 11:53:02.858255   39321 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube}
	I0629 11:53:02.858272   39321 ubuntu.go:177] setting up certificates
	I0629 11:53:02.858281   39321 provision.go:83] configureAuth start
	I0629 11:53:02.858345   39321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220629114717-24356
	I0629 11:53:02.929876   39321 provision.go:138] copyHostCerts
	I0629 11:53:02.929998   39321 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem, removing ...
	I0629 11:53:02.930014   39321 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem
	I0629 11:53:02.930137   39321 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem (1082 bytes)
	I0629 11:53:02.930410   39321 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem, removing ...
	I0629 11:53:02.930419   39321 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem
	I0629 11:53:02.930485   39321 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem (1123 bytes)
	I0629 11:53:02.930681   39321 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem, removing ...
	I0629 11:53:02.930688   39321 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem
	I0629 11:53:02.930750   39321 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem (1675 bytes)
	I0629 11:53:02.930868   39321 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220629114717-24356 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220629114717-24356]
	I0629 11:53:03.099477   39321 provision.go:172] copyRemoteCerts
	I0629 11:53:03.099537   39321 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 11:53:03.099583   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:03.171561   39321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60321 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/old-k8s-version-20220629114717-24356/id_rsa Username:docker}
	I0629 11:53:03.259681   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0629 11:53:03.277353   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0629 11:53:03.294474   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0629 11:53:03.311679   39321 provision.go:86] duration metric: configureAuth took 453.364787ms
	I0629 11:53:03.311691   39321 ubuntu.go:193] setting minikube options for container-runtime
	I0629 11:53:03.311820   39321 config.go:178] Loaded profile config "old-k8s-version-20220629114717-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0629 11:53:03.311873   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:03.383560   39321 main.go:134] libmachine: Using SSH client type: native
	I0629 11:53:03.383791   39321 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60321 <nil> <nil>}
	I0629 11:53:03.383829   39321 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0629 11:53:03.505174   39321 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0629 11:53:03.505190   39321 ubuntu.go:71] root file system type: overlay
	I0629 11:53:03.505337   39321 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0629 11:53:03.505412   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:03.576780   39321 main.go:134] libmachine: Using SSH client type: native
	I0629 11:53:03.576940   39321 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60321 <nil> <nil>}
	I0629 11:53:03.576993   39321 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0629 11:53:03.702032   39321 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0629 11:53:03.702109   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:03.773428   39321 main.go:134] libmachine: Using SSH client type: native
	I0629 11:53:03.773587   39321 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60321 <nil> <nil>}
	I0629 11:53:03.773602   39321 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
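The `diff -u old new || { mv ...; systemctl restart docker; }` command above installs the regenerated unit and restarts Docker only when the file actually changed. A minimal sketch of that write-if-changed pattern (function name is hypothetical; real systemd handling is omitted):

```python
def install_if_changed(path, new_content):
    """Mimic the `diff -u old new || { mv new old; restart }` pattern:
    rewrite the file and return True only when the content differs, so
    the caller can skip the (disruptive) service restart otherwise."""
    try:
        with open(path) as f:
            if f.read() == new_content:
                return False        # unchanged: no restart needed
    except FileNotFoundError:
        pass                        # first install counts as a change
    with open(path, "w") as f:
        f.write(new_content)
    return True                     # changed: caller should daemon-reload + restart
```

This is why the provisioning step completes in milliseconds on a warm machine: the generated unit matches the installed one and the restart branch never runs.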
	I0629 11:53:03.895380   39321 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 11:53:03.895393   39321 machine.go:91] provisioned docker machine in 1.436757152s
	I0629 11:53:03.895403   39321 start.go:306] post-start starting for "old-k8s-version-20220629114717-24356" (driver="docker")
	I0629 11:53:03.895408   39321 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 11:53:03.895461   39321 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 11:53:03.895508   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:03.971006   39321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60321 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/old-k8s-version-20220629114717-24356/id_rsa Username:docker}
	I0629 11:53:04.056695   39321 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 11:53:04.060270   39321 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 11:53:04.060284   39321 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 11:53:04.060291   39321 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 11:53:04.060295   39321 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 11:53:04.060306   39321 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/addons for local assets ...
	I0629 11:53:04.060434   39321 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files for local assets ...
	I0629 11:53:04.060599   39321 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem -> 243562.pem in /etc/ssl/certs
	I0629 11:53:04.060774   39321 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 11:53:04.067711   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /etc/ssl/certs/243562.pem (1708 bytes)
	I0629 11:53:04.085232   39321 start.go:309] post-start completed in 189.815092ms
	I0629 11:53:04.085301   39321 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 11:53:04.085359   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:04.156347   39321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60321 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/old-k8s-version-20220629114717-24356/id_rsa Username:docker}
	I0629 11:53:04.238000   39321 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 11:53:04.242481   39321 fix.go:57] fixHost completed within 2.414782183s
	I0629 11:53:04.242492   39321 start.go:81] releasing machines lock for "old-k8s-version-20220629114717-24356", held for 2.414817597s
	I0629 11:53:04.242573   39321 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220629114717-24356
	I0629 11:53:04.313552   39321 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 11:53:04.313558   39321 ssh_runner.go:195] Run: systemctl --version
	I0629 11:53:04.313633   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:04.313644   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:04.389089   39321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60321 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/old-k8s-version-20220629114717-24356/id_rsa Username:docker}
	I0629 11:53:04.391746   39321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60321 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/old-k8s-version-20220629114717-24356/id_rsa Username:docker}
	I0629 11:53:04.950787   39321 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0629 11:53:04.961037   39321 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0629 11:53:04.961098   39321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 11:53:04.972557   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 11:53:04.985220   39321 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0629 11:53:05.057913   39321 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0629 11:53:05.127457   39321 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 11:53:05.201096   39321 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0629 11:53:05.403377   39321 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 11:53:05.442119   39321 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 11:53:05.520315   39321 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0629 11:53:05.520496   39321 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220629114717-24356 dig +short host.docker.internal
	I0629 11:53:05.646740   39321 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0629 11:53:05.646853   39321 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0629 11:53:05.651058   39321 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
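The `{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo ...; } > /tmp/h.$$; sudo cp ...` pipeline above is an idempotent upsert: drop any stale mapping for the hostname, then append the current one. The same logic in Python (illustrative helper, not minikube code):

```python
def upsert_hosts_entry(lines, ip, hostname):
    """Remove any /etc/hosts line already ending in `\t<hostname>` and
    append a fresh `<ip>\t<hostname>` mapping -- the effect of the
    grep -v / echo pipeline in the log."""
    kept = [line for line in lines if not line.endswith("\t" + hostname)]
    kept.append(f"{ip}\t{hostname}")
    return kept

hosts = ["127.0.0.1\tlocalhost", "10.0.0.9\thost.minikube.internal"]
print(upsert_hosts_entry(hosts, "192.168.65.2", "host.minikube.internal"))
# -> ['127.0.0.1\tlocalhost', '192.168.65.2\thost.minikube.internal']
```

Running it twice with the same arguments yields the same file, which is what makes the provisioning step safe to repeat on restart.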
	I0629 11:53:05.662556   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:05.733785   39321 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0629 11:53:05.733877   39321 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 11:53:05.763532   39321 docker.go:602] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0629 11:53:05.763547   39321 docker.go:533] Images already preloaded, skipping extraction
	I0629 11:53:05.763613   39321 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 11:53:05.793235   39321 docker.go:602] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0629 11:53:05.793253   39321 cache_images.go:84] Images are preloaded, skipping loading
	I0629 11:53:05.793340   39321 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0629 11:53:05.867180   39321 cni.go:95] Creating CNI manager for ""
	I0629 11:53:05.867191   39321 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:53:05.867206   39321 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0629 11:53:05.867219   39321 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220629114717-24356 NodeName:old-k8s-version-20220629114717-24356 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 11:53:05.867334   39321 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220629114717-24356"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220629114717-24356
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
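The kubeadm config rendered above is a single YAML stream holding four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---` lines. A rough sketch of splitting such a stream and reading each document's `kind:` without a YAML library (adequate for this well-behaved config, not a general parser):

```python
def split_kinds(multi_doc):
    """Split a kubeadm-style multi-document YAML stream on `---`
    separator lines and return each document's top-level `kind:`."""
    kinds = []
    for doc in multi_doc.split("\n---\n"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                kinds.append(line.split(":", 1)[1].strip())
    return kinds
```

Each document is consumed by a different component: the first two by `kubeadm init`, the third by the kubelet, the last by kube-proxy.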
	
	I0629 11:53:05.867405   39321 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220629114717-24356 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220629114717-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0629 11:53:05.867467   39321 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0629 11:53:05.874886   39321 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 11:53:05.874948   39321 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0629 11:53:05.881929   39321 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0629 11:53:05.894526   39321 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 11:53:05.906971   39321 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0629 11:53:05.919357   39321 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0629 11:53:05.923010   39321 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 11:53:05.934256   39321 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356 for IP: 192.168.76.2
	I0629 11:53:05.934374   39321 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key
	I0629 11:53:05.934432   39321 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key
	I0629 11:53:05.934518   39321 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/client.key
	I0629 11:53:05.934586   39321 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/apiserver.key.31bdca25
	I0629 11:53:05.934644   39321 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/proxy-client.key
	I0629 11:53:05.934860   39321 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem (1338 bytes)
	W0629 11:53:05.934902   39321 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356_empty.pem, impossibly tiny 0 bytes
	I0629 11:53:05.934916   39321 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem (1679 bytes)
	I0629 11:53:05.934951   39321 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem (1082 bytes)
	I0629 11:53:05.934990   39321 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem (1123 bytes)
	I0629 11:53:05.935032   39321 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem (1675 bytes)
	I0629 11:53:05.935095   39321 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem (1708 bytes)
	I0629 11:53:05.935616   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0629 11:53:05.952783   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0629 11:53:05.969962   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0629 11:53:05.986903   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/old-k8s-version-20220629114717-24356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0629 11:53:06.004120   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 11:53:04.931647   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:06.931781   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:06.022586   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0629 11:53:06.059761   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 11:53:06.076874   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0629 11:53:06.093750   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem --> /usr/share/ca-certificates/24356.pem (1338 bytes)
	I0629 11:53:06.110970   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /usr/share/ca-certificates/243562.pem (1708 bytes)
	I0629 11:53:06.128088   39321 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 11:53:06.146358   39321 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0629 11:53:06.159473   39321 ssh_runner.go:195] Run: openssl version
	I0629 11:53:06.164773   39321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 11:53:06.172822   39321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:53:06.176828   39321 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 17:54 /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:53:06.176875   39321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:53:06.182239   39321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 11:53:06.189362   39321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24356.pem && ln -fs /usr/share/ca-certificates/24356.pem /etc/ssl/certs/24356.pem"
	I0629 11:53:06.197559   39321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24356.pem
	I0629 11:53:06.201505   39321 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 17:58 /usr/share/ca-certificates/24356.pem
	I0629 11:53:06.201555   39321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24356.pem
	I0629 11:53:06.207119   39321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24356.pem /etc/ssl/certs/51391683.0"
	I0629 11:53:06.214849   39321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/243562.pem && ln -fs /usr/share/ca-certificates/243562.pem /etc/ssl/certs/243562.pem"
	I0629 11:53:06.222597   39321 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/243562.pem
	I0629 11:53:06.226582   39321 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 17:58 /usr/share/ca-certificates/243562.pem
	I0629 11:53:06.226621   39321 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/243562.pem
	I0629 11:53:06.231864   39321 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/243562.pem /etc/ssl/certs/3ec20f2e.0"
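The cert steps above build an OpenSSL-style hash directory: `openssl x509 -hash -noout` prints the subject-name hash, and `test -L <hash>.0 || ln -fs ...` links the certificate under that name so OpenSSL can find it by hash lookup. A sketch of just the symlink half (the hash is assumed to be computed externally, as in the log):

```python
import os

def link_ca_cert(cert_path, subject_hash, certs_dir):
    """Create the `<subject_hash>.0` symlink OpenSSL's CA lookup expects,
    skipping it if the link already exists (the `test -L || ln -fs`
    pattern from the log). subject_hash comes from
    `openssl x509 -hash -noout -in <cert>`."""
    link = os.path.join(certs_dir, subject_hash + ".0")
    if not os.path.islink(link):
        os.symlink(cert_path, link)
    return link
```

The `.0` suffix is a collision counter: a second certificate with the same subject hash would be linked as `<hash>.1`, which this sketch does not handle.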
	I0629 11:53:06.239364   39321 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220629114717-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220629114717-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:53:06.239478   39321 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 11:53:06.268678   39321 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0629 11:53:06.276184   39321 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0629 11:53:06.276201   39321 kubeadm.go:626] restartCluster start
	I0629 11:53:06.276249   39321 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0629 11:53:06.282969   39321 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:06.283027   39321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220629114717-24356
	I0629 11:53:06.354486   39321 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220629114717-24356" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 11:53:06.354648   39321 kubeconfig.go:127] "old-k8s-version-20220629114717-24356" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig - will repair!
	I0629 11:53:06.354967   39321 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk20ebad566718388182fa7c9da1cb4ef6bd9ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:53:06.356063   39321 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0629 11:53:06.363888   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:06.363980   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:06.372296   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:06.572897   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:06.573039   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:06.583383   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:06.773156   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:06.773259   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:06.783501   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:06.972425   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:06.972514   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:06.981322   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:07.173227   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:07.173323   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:07.183915   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:07.373230   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:07.373327   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:07.383900   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:07.573955   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:07.574107   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:07.584389   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:07.774471   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:07.774706   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:07.784989   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:07.972462   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:07.972554   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:07.982777   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:08.172517   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:08.172614   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:08.183424   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:08.372918   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:08.373101   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:08.383561   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:08.572500   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:08.572573   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:08.582518   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:08.772633   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:08.772771   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:08.783206   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:08.972740   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:08.972875   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:08.983311   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:09.172733   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:09.172846   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:09.183530   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:09.372639   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:09.372862   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:09.383814   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:09.383824   39321 api_server.go:165] Checking apiserver status ...
	I0629 11:53:09.383870   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:53:09.392053   39321 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:53:09.392064   39321 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0629 11:53:09.392072   39321 kubeadm.go:1092] stopping kube-system containers ...
	I0629 11:53:09.392131   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 11:53:09.420212   39321 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0629 11:53:09.433676   39321 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 11:53:09.441303   39321 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5747 Jun 29 18:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5787 Jun 29 18:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5935 Jun 29 18:49 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5731 Jun 29 18:49 /etc/kubernetes/scheduler.conf
	
	I0629 11:53:09.441356   39321 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0629 11:53:09.448705   39321 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0629 11:53:09.455863   39321 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0629 11:53:09.463598   39321 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0629 11:53:09.470944   39321 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 11:53:09.479430   39321 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0629 11:53:09.479451   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:53:09.530261   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:53:10.632194   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.101882408s)
	I0629 11:53:10.632212   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:53:10.847331   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:53:10.904889   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:53:10.963035   39321 api_server.go:51] waiting for apiserver process to appear ...
	I0629 11:53:10.963098   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:08.931920   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:11.430843   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:11.471629   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:11.971653   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:12.471604   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:12.973656   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:13.471720   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:13.971792   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:14.473862   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:14.972657   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:15.472511   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:15.973033   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:13.432531   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:15.934415   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:16.472375   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:16.972679   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:17.471980   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:17.972744   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:18.472610   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:18.972373   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:19.471947   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:19.972438   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:20.472581   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:20.972723   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:18.432311   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:20.432454   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:21.473577   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:21.972016   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:22.472026   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:22.973315   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:23.471896   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:23.972447   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:24.471973   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:24.973386   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:25.473637   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:25.972648   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:22.932135   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:24.933190   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:27.432928   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:26.472198   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:26.972657   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:27.472346   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:27.972638   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:28.473151   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:28.972205   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:29.472234   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:29.972717   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:30.472697   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:30.972995   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:29.433003   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:31.433480   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:31.472433   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:31.972406   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:32.472190   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:32.974199   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:33.472460   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:33.972993   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:34.472909   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:34.972289   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:35.473152   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:35.972577   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:33.433642   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:35.932766   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:36.474436   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:36.973628   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:37.472308   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:37.973415   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:38.472767   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:38.974410   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:39.473141   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:39.972605   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:40.472482   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:40.972864   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:37.933277   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:40.432620   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:42.433936   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:41.472723   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:41.974616   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:42.472627   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:42.972675   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:43.472686   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:43.973714   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:44.473536   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:44.973783   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:45.472730   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:45.972999   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:44.434699   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:46.933086   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:46.473581   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:46.973015   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:47.472857   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:47.972929   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:48.474126   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:48.972902   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:49.472981   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:49.972804   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:50.473092   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:50.973396   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:49.434292   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:51.434828   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:51.473121   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:51.973014   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:52.473008   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:52.973431   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:53.472906   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:53.973182   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:54.473436   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:54.974299   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:55.473284   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:55.973150   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:53.932198   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:56.434724   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:53:56.474409   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:56.973527   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:57.472991   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:57.972998   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:58.473348   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:58.973142   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:59.473282   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:59.973927   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:00.473094   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:00.974069   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:53:58.935361   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:54:01.434028   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:54:01.474438   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:01.973191   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:02.473214   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:02.973108   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:03.475258   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:03.974208   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:04.473408   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:04.975325   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:05.473242   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:05.974115   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:03.933474   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:54:05.935169   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:54:06.474575   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:06.973453   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:07.473535   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:07.973316   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:08.473278   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:08.974032   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:09.473400   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:09.973400   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:10.473858   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:10.973493   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:11.005027   39321 logs.go:274] 0 containers: []
	W0629 11:54:11.005047   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:11.005174   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:08.434385   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:54:10.435932   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:54:11.034514   39321 logs.go:274] 0 containers: []
	W0629 11:54:11.044684   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:11.044771   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:11.074864   39321 logs.go:274] 0 containers: []
	W0629 11:54:11.074876   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:11.074948   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:11.107049   39321 logs.go:274] 0 containers: []
	W0629 11:54:11.107060   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:11.107125   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:11.136126   39321 logs.go:274] 0 containers: []
	W0629 11:54:11.136137   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:11.136202   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:11.166106   39321 logs.go:274] 0 containers: []
	W0629 11:54:11.166123   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:11.166197   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:11.195233   39321 logs.go:274] 0 containers: []
	W0629 11:54:11.195244   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:11.195311   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:11.224314   39321 logs.go:274] 0 containers: []
	W0629 11:54:11.224326   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:11.224333   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:11.224341   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:11.238284   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:11.238295   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:13.292784   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054415695s)
	I0629 11:54:13.292934   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:13.292941   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:13.333282   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:13.333295   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:13.345303   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:13.345316   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:13.397489   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:15.899245   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:15.973676   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:16.003497   39321 logs.go:274] 0 containers: []
	W0629 11:54:16.003509   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:16.003567   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:12.934751   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:54:15.435329   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:54:16.033526   39321 logs.go:274] 0 containers: []
	W0629 11:54:16.044819   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:16.044901   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:16.076936   39321 logs.go:274] 0 containers: []
	W0629 11:54:16.076948   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:16.077013   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:16.107083   39321 logs.go:274] 0 containers: []
	W0629 11:54:16.107095   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:16.107151   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:16.138323   39321 logs.go:274] 0 containers: []
	W0629 11:54:16.138335   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:16.138389   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:16.167336   39321 logs.go:274] 0 containers: []
	W0629 11:54:16.167348   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:16.167417   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:16.198137   39321 logs.go:274] 0 containers: []
	W0629 11:54:16.198149   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:16.198204   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:16.227979   39321 logs.go:274] 0 containers: []
	W0629 11:54:16.227992   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:16.227999   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:16.228012   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:16.267349   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:16.267364   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:16.279505   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:16.279520   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:16.331710   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:16.331728   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:16.331736   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:16.345394   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:16.345405   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:18.399883   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05440587s)
	I0629 11:54:20.900466   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:20.973806   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:21.004342   39321 logs.go:274] 0 containers: []
	W0629 11:54:21.004356   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:21.004415   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:17.934521   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:54:20.436650   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:54:21.034479   39321 logs.go:274] 0 containers: []
	W0629 11:54:21.045019   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:21.045125   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:21.075792   39321 logs.go:274] 0 containers: []
	W0629 11:54:21.075805   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:21.075876   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:21.113638   39321 logs.go:274] 0 containers: []
	W0629 11:54:21.113651   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:21.113708   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:21.143417   39321 logs.go:274] 0 containers: []
	W0629 11:54:21.143429   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:21.143492   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:21.172595   39321 logs.go:274] 0 containers: []
	W0629 11:54:21.172607   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:21.172672   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:21.201866   39321 logs.go:274] 0 containers: []
	W0629 11:54:21.201878   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:21.201937   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:21.230654   39321 logs.go:274] 0 containers: []
	W0629 11:54:21.230664   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:21.230671   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:21.230677   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:21.271551   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:21.271572   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:21.284291   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:21.284305   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:21.340570   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:21.340584   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:21.340593   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:21.354206   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:21.354218   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:23.410357   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056065961s)
	I0629 11:54:25.911253   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:25.974183   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:26.006527   39321 logs.go:274] 0 containers: []
	W0629 11:54:26.006539   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:26.006593   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:22.935934   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:54:25.434546   39013 pod_ready.go:102] pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace has status "Ready":"False"
	I0629 11:54:27.928494   39013 pod_ready.go:81] duration metric: took 4m0.013477475s waiting for pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace to be "Ready" ...
	E0629 11:54:27.928518   39013 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-ws5qk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0629 11:54:27.928588   39013 pod_ready.go:38] duration metric: took 4m15.068434231s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 11:54:27.928632   39013 kubeadm.go:630] restartCluster took 4m25.017561497s
	W0629 11:54:27.928753   39013 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0629 11:54:27.928782   39013 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0629 11:54:30.406051   39013 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.477179412s)
	I0629 11:54:30.406109   39013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 11:54:30.416106   39013 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 11:54:30.423937   39013 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 11:54:30.423981   39013 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 11:54:30.431422   39013 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 11:54:30.431447   39013 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 11:54:26.034855   39321 logs.go:274] 0 containers: []
	W0629 11:54:26.045013   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:26.045108   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:26.075260   39321 logs.go:274] 0 containers: []
	W0629 11:54:26.075272   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:26.075332   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:26.104633   39321 logs.go:274] 0 containers: []
	W0629 11:54:26.104645   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:26.104702   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:26.134389   39321 logs.go:274] 0 containers: []
	W0629 11:54:26.134402   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:26.134460   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:26.165666   39321 logs.go:274] 0 containers: []
	W0629 11:54:26.165678   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:26.165744   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:26.196944   39321 logs.go:274] 0 containers: []
	W0629 11:54:26.196959   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:26.197023   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:26.224887   39321 logs.go:274] 0 containers: []
	W0629 11:54:26.224902   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:26.224910   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:26.224917   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:26.264545   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:26.264559   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:26.275868   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:26.275882   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:26.329330   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:26.329346   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:26.329353   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:26.343299   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:26.343311   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:28.396021   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052636665s)
	I0629 11:54:30.896828   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:30.973978   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:31.008212   39321 logs.go:274] 0 containers: []
	W0629 11:54:31.008225   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:31.008285   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:30.710947   39013 out.go:204]   - Generating certificates and keys ...
	I0629 11:54:31.365688   39013 out.go:204]   - Booting up control plane ...
	I0629 11:54:31.041367   39321 logs.go:274] 0 containers: []
	W0629 11:54:31.045055   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:31.045123   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:31.077818   39321 logs.go:274] 0 containers: []
	W0629 11:54:31.077830   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:31.077893   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:31.108115   39321 logs.go:274] 0 containers: []
	W0629 11:54:31.108128   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:31.108192   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:31.138455   39321 logs.go:274] 0 containers: []
	W0629 11:54:31.138469   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:31.138532   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:31.169314   39321 logs.go:274] 0 containers: []
	W0629 11:54:31.169329   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:31.169389   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:31.199503   39321 logs.go:274] 0 containers: []
	W0629 11:54:31.199515   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:31.199584   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:31.230870   39321 logs.go:274] 0 containers: []
	W0629 11:54:31.230884   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:31.230893   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:31.230912   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:31.274860   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:31.274876   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:31.289572   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:31.289588   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:31.345087   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:31.345100   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:31.345106   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:31.362082   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:31.362095   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:33.419132   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056963483s)
	I0629 11:54:35.919752   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:35.976084   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:36.006737   39321 logs.go:274] 0 containers: []
	W0629 11:54:36.006750   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:36.006814   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:36.036631   39321 logs.go:274] 0 containers: []
	W0629 11:54:36.045922   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:36.045984   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:36.075280   39321 logs.go:274] 0 containers: []
	W0629 11:54:36.075293   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:36.075359   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:36.105709   39321 logs.go:274] 0 containers: []
	W0629 11:54:36.105720   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:36.105789   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:36.135433   39321 logs.go:274] 0 containers: []
	W0629 11:54:36.135445   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:36.135509   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:36.164044   39321 logs.go:274] 0 containers: []
	W0629 11:54:36.164057   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:36.164116   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:36.193256   39321 logs.go:274] 0 containers: []
	W0629 11:54:36.193269   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:36.193331   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:36.221611   39321 logs.go:274] 0 containers: []
	W0629 11:54:36.221623   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:36.221630   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:36.221636   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:36.261723   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:36.261740   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:36.273915   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:36.273934   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:36.332462   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:36.332479   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:36.332487   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:36.346115   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:36.346128   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:38.400565   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054363884s)
	I0629 11:54:40.901227   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:40.976044   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:41.005727   39321 logs.go:274] 0 containers: []
	W0629 11:54:41.005739   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:41.005796   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:38.418030   39013 out.go:204]   - Configuring RBAC rules ...
	I0629 11:54:38.793760   39013 cni.go:95] Creating CNI manager for ""
	I0629 11:54:38.793771   39013 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:54:38.793794   39013 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0629 11:54:38.793877   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=80ef72c6e06144133907f90b1b2924df52b551ed minikube.k8s.io/name=no-preload-20220629114832-24356 minikube.k8s.io/updated_at=2022_06_29T11_54_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:38.793879   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:38.963621   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:38.963622   39013 ops.go:34] apiserver oom_adj: -16
	I0629 11:54:39.516051   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:40.015548   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:40.516675   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:41.015806   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:41.515804   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:42.016197   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:41.036553   39321 logs.go:274] 0 containers: []
	W0629 11:54:41.045422   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:41.045478   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:41.075203   39321 logs.go:274] 0 containers: []
	W0629 11:54:41.075216   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:41.075276   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:41.108156   39321 logs.go:274] 0 containers: []
	W0629 11:54:41.108168   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:41.108227   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:41.137946   39321 logs.go:274] 0 containers: []
	W0629 11:54:41.137957   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:41.138020   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:41.167765   39321 logs.go:274] 0 containers: []
	W0629 11:54:41.167777   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:41.167846   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:41.197634   39321 logs.go:274] 0 containers: []
	W0629 11:54:41.197645   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:41.197700   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:41.226006   39321 logs.go:274] 0 containers: []
	W0629 11:54:41.226019   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:41.226025   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:41.226036   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:41.278933   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:41.278945   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:41.278952   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:41.292648   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:41.292661   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:43.349789   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057054339s)
	I0629 11:54:43.349901   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:43.349908   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:43.389415   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:43.389428   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:45.901944   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:45.976279   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:46.007239   39321 logs.go:274] 0 containers: []
	W0629 11:54:46.007251   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:46.007317   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:42.517669   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:43.015630   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:43.517707   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:44.015840   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:44.515768   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:45.016492   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:45.516201   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:46.016051   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:46.515723   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:47.017768   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:46.038729   39321 logs.go:274] 0 containers: []
	W0629 11:54:46.045289   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:46.045348   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:46.080579   39321 logs.go:274] 0 containers: []
	W0629 11:54:46.080656   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:46.080727   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:46.110618   39321 logs.go:274] 0 containers: []
	W0629 11:54:46.110630   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:46.110691   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:46.139982   39321 logs.go:274] 0 containers: []
	W0629 11:54:46.139994   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:46.140049   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:46.168606   39321 logs.go:274] 0 containers: []
	W0629 11:54:46.168620   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:46.168685   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:46.198162   39321 logs.go:274] 0 containers: []
	W0629 11:54:46.198175   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:46.198238   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:46.226969   39321 logs.go:274] 0 containers: []
	W0629 11:54:46.226980   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:46.226987   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:46.226995   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:48.280086   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053017479s)
	I0629 11:54:48.280198   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:48.280208   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:48.321498   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:48.321516   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:48.333730   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:48.333746   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:48.386942   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:48.386954   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:48.386963   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:50.902020   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:50.976006   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:51.016056   39321 logs.go:274] 0 containers: []
	W0629 11:54:51.016066   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:51.016114   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:47.516204   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:48.016295   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:48.515997   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:49.015736   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:49.517555   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:50.016719   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:50.516173   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:51.015839   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:51.516070   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:52.016151   39013 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 11:54:52.076730   39013 kubeadm.go:1045] duration metric: took 13.282526806s to wait for elevateKubeSystemPrivileges.
	I0629 11:54:52.076746   39013 kubeadm.go:397] StartCluster complete in 4m49.203921961s
	I0629 11:54:52.076764   39013 settings.go:142] acquiring lock: {Name:mk8cd784535a926dd1b6955ad1b3a357865d16d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:54:52.076848   39013 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 11:54:52.077402   39013 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk20ebad566718388182fa7c9da1cb4ef6bd9ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:54:52.592513   39013 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220629114832-24356" rescaled to 1
	I0629 11:54:52.592549   39013 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 11:54:52.592571   39013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0629 11:54:52.592603   39013 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0629 11:54:52.592801   39013 config.go:178] Loaded profile config "no-preload-20220629114832-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 11:54:52.613503   39013 out.go:177] * Verifying Kubernetes components...
	I0629 11:54:52.613574   39013 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220629114832-24356"
	I0629 11:54:52.613575   39013 addons.go:65] Setting dashboard=true in profile "no-preload-20220629114832-24356"
	I0629 11:54:52.655332   39013 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220629114832-24356"
	W0629 11:54:52.655344   39013 addons.go:162] addon storage-provisioner should already be in state true
	I0629 11:54:52.655336   39013 addons.go:153] Setting addon dashboard=true in "no-preload-20220629114832-24356"
	I0629 11:54:52.655355   39013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W0629 11:54:52.655364   39013 addons.go:162] addon dashboard should already be in state true
	I0629 11:54:52.613587   39013 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220629114832-24356"
	I0629 11:54:52.655401   39013 host.go:66] Checking if "no-preload-20220629114832-24356" exists ...
	I0629 11:54:52.655406   39013 host.go:66] Checking if "no-preload-20220629114832-24356" exists ...
	I0629 11:54:52.613581   39013 addons.go:65] Setting metrics-server=true in profile "no-preload-20220629114832-24356"
	I0629 11:54:52.655414   39013 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220629114832-24356"
	I0629 11:54:52.655429   39013 addons.go:153] Setting addon metrics-server=true in "no-preload-20220629114832-24356"
	W0629 11:54:52.655437   39013 addons.go:162] addon metrics-server should already be in state true
	I0629 11:54:52.655470   39013 host.go:66] Checking if "no-preload-20220629114832-24356" exists ...
	I0629 11:54:52.655688   39013 cli_runner.go:164] Run: docker container inspect no-preload-20220629114832-24356 --format={{.State.Status}}
	I0629 11:54:52.656760   39013 cli_runner.go:164] Run: docker container inspect no-preload-20220629114832-24356 --format={{.State.Status}}
	I0629 11:54:52.656831   39013 cli_runner.go:164] Run: docker container inspect no-preload-20220629114832-24356 --format={{.State.Status}}
	I0629 11:54:52.658957   39013 cli_runner.go:164] Run: docker container inspect no-preload-20220629114832-24356 --format={{.State.Status}}
	I0629 11:54:52.772237   39013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0629 11:54:52.830222   39013 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0629 11:54:52.772251   39013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220629114832-24356
	I0629 11:54:52.788521   39013 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220629114832-24356"
	I0629 11:54:52.867959   39013 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0629 11:54:52.809280   39013 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0629 11:54:52.868011   39013 addons.go:162] addon default-storageclass should already be in state true
	I0629 11:54:52.915294   39013 host.go:66] Checking if "no-preload-20220629114832-24356" exists ...
	I0629 11:54:52.915374   39013 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0629 11:54:52.937447   39013 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 11:54:52.958108   39013 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0629 11:54:52.958107   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0629 11:54:52.958132   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0629 11:54:52.995162   39013 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0629 11:54:52.958546   39013 cli_runner.go:164] Run: docker container inspect no-preload-20220629114832-24356 --format={{.State.Status}}
	I0629 11:54:52.995176   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0629 11:54:52.995225   39013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220629114832-24356
	I0629 11:54:52.995235   39013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220629114832-24356
	I0629 11:54:52.995234   39013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220629114832-24356
	I0629 11:54:53.015922   39013 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220629114832-24356" to be "Ready" ...
	I0629 11:54:53.046806   39013 node_ready.go:49] node "no-preload-20220629114832-24356" has status "Ready":"True"
	I0629 11:54:53.046823   39013 node_ready.go:38] duration metric: took 30.874814ms waiting for node "no-preload-20220629114832-24356" to be "Ready" ...
	I0629 11:54:53.046832   39013 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 11:54:53.056385   39013 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-fcqdl" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:53.119675   39013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60184 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/no-preload-20220629114832-24356/id_rsa Username:docker}
	I0629 11:54:53.120197   39013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60184 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/no-preload-20220629114832-24356/id_rsa Username:docker}
	I0629 11:54:53.120684   39013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60184 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/no-preload-20220629114832-24356/id_rsa Username:docker}
	I0629 11:54:53.122204   39013 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0629 11:54:53.122217   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0629 11:54:53.122275   39013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220629114832-24356
	I0629 11:54:53.208303   39013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60184 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/no-preload-20220629114832-24356/id_rsa Username:docker}
	I0629 11:54:53.263548   39013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 11:54:53.270566   39013 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0629 11:54:53.270596   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0629 11:54:53.280968   39013 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0629 11:54:53.280980   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0629 11:54:53.361187   39013 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0629 11:54:53.361202   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0629 11:54:53.369439   39013 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0629 11:54:53.369453   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0629 11:54:53.446321   39013 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0629 11:54:53.446336   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0629 11:54:53.453020   39013 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0629 11:54:53.453040   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0629 11:54:53.467130   39013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0629 11:54:53.472216   39013 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0629 11:54:53.472227   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0629 11:54:53.479863   39013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0629 11:54:53.553558   39013 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0629 11:54:53.553575   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0629 11:54:53.589176   39013 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0629 11:54:53.589190   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0629 11:54:53.667441   39013 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0629 11:54:53.667461   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0629 11:54:53.685591   39013 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0629 11:54:53.685603   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0629 11:54:53.746807   39013 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0629 11:54:53.746822   39013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0629 11:54:53.760881   39013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0629 11:54:53.980512   39013 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.150241193s)
	I0629 11:54:53.980529   39013 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0629 11:54:54.250331   39013 addons.go:383] Verifying addon metrics-server=true in "no-preload-20220629114832-24356"
	I0629 11:54:54.547583   39013 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0629 11:54:51.048022   39321 logs.go:274] 0 containers: []
	W0629 11:54:51.048034   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:51.048093   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:51.081074   39321 logs.go:274] 0 containers: []
	W0629 11:54:51.081085   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:51.081143   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:51.112957   39321 logs.go:274] 0 containers: []
	W0629 11:54:51.112968   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:51.113030   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:51.145997   39321 logs.go:274] 0 containers: []
	W0629 11:54:51.146009   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:51.146068   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:51.176395   39321 logs.go:274] 0 containers: []
	W0629 11:54:51.176407   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:51.176469   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:51.208630   39321 logs.go:274] 0 containers: []
	W0629 11:54:51.208645   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:51.208708   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:51.239987   39321 logs.go:274] 0 containers: []
	W0629 11:54:51.240003   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:51.240012   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:51.240021   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:51.287920   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:51.287939   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:51.302964   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:51.302985   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:51.362169   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:51.362179   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:51.362186   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:51.376235   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:51.376248   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:53.427692   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051370993s)
	I0629 11:54:55.928476   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:55.976666   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:54:56.005708   39321 logs.go:274] 0 containers: []
	W0629 11:54:56.005720   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:54:56.005780   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:54:54.568638   39013 addons.go:414] enableAddons completed in 1.976002698s
	I0629 11:54:54.577787   39013 pod_ready.go:92] pod "coredns-6d4b75cb6d-fcqdl" in "kube-system" namespace has status "Ready":"True"
	I0629 11:54:54.577802   39013 pod_ready.go:81] duration metric: took 1.52135183s waiting for pod "coredns-6d4b75cb6d-fcqdl" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:54.577811   39013 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-mkj7b" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:56.088829   39013 pod_ready.go:92] pod "coredns-6d4b75cb6d-mkj7b" in "kube-system" namespace has status "Ready":"True"
	I0629 11:54:56.088843   39013 pod_ready.go:81] duration metric: took 1.510981571s waiting for pod "coredns-6d4b75cb6d-mkj7b" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:56.088850   39013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20220629114832-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:56.095348   39013 pod_ready.go:92] pod "etcd-no-preload-20220629114832-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 11:54:56.095358   39013 pod_ready.go:81] duration metric: took 6.502967ms waiting for pod "etcd-no-preload-20220629114832-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:56.095365   39013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20220629114832-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:56.101367   39013 pod_ready.go:92] pod "kube-apiserver-no-preload-20220629114832-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 11:54:56.101377   39013 pod_ready.go:81] duration metric: took 6.00742ms waiting for pod "kube-apiserver-no-preload-20220629114832-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:56.101384   39013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20220629114832-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:56.107696   39013 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220629114832-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 11:54:56.107705   39013 pod_ready.go:81] duration metric: took 6.316155ms waiting for pod "kube-controller-manager-no-preload-20220629114832-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:56.107711   39013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7cvpr" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:56.219241   39013 pod_ready.go:92] pod "kube-proxy-7cvpr" in "kube-system" namespace has status "Ready":"True"
	I0629 11:54:56.219251   39013 pod_ready.go:81] duration metric: took 111.532331ms waiting for pod "kube-proxy-7cvpr" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:56.219257   39013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20220629114832-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:56.620319   39013 pod_ready.go:92] pod "kube-scheduler-no-preload-20220629114832-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 11:54:56.620332   39013 pod_ready.go:81] duration metric: took 401.057657ms waiting for pod "kube-scheduler-no-preload-20220629114832-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:54:56.620339   39013 pod_ready.go:38] duration metric: took 3.573386669s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 11:54:56.620353   39013 api_server.go:51] waiting for apiserver process to appear ...
	I0629 11:54:56.620418   39013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:54:56.630540   39013 api_server.go:71] duration metric: took 4.037851613s to wait for apiserver process to appear ...
	I0629 11:54:56.630553   39013 api_server.go:87] waiting for apiserver healthz status ...
	I0629 11:54:56.630560   39013 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60183/healthz ...
	I0629 11:54:56.635607   39013 api_server.go:266] https://127.0.0.1:60183/healthz returned 200:
	ok
	I0629 11:54:56.636666   39013 api_server.go:140] control plane version: v1.24.2
	I0629 11:54:56.636674   39013 api_server.go:130] duration metric: took 6.116861ms to wait for apiserver health ...
	I0629 11:54:56.636678   39013 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 11:54:56.823080   39013 system_pods.go:59] 9 kube-system pods found
	I0629 11:54:56.823092   39013 system_pods.go:61] "coredns-6d4b75cb6d-fcqdl" [fbcd50cd-0663-4e51-b103-e520c8d33ce3] Running
	I0629 11:54:56.823096   39013 system_pods.go:61] "coredns-6d4b75cb6d-mkj7b" [cdff1c2d-7c51-46bb-bd66-28e55f071f74] Running
	I0629 11:54:56.823099   39013 system_pods.go:61] "etcd-no-preload-20220629114832-24356" [f20e1065-ccd6-4e6b-9f89-19a78c82d84c] Running
	I0629 11:54:56.823103   39013 system_pods.go:61] "kube-apiserver-no-preload-20220629114832-24356" [ecc08c98-b6c2-44b1-892f-6190e6bf0f52] Running
	I0629 11:54:56.823106   39013 system_pods.go:61] "kube-controller-manager-no-preload-20220629114832-24356" [9d831661-e795-486e-9acf-c95e6bfe23b9] Running
	I0629 11:54:56.823110   39013 system_pods.go:61] "kube-proxy-7cvpr" [470eaa9c-23cf-4ede-ab50-7ed59f41354a] Running
	I0629 11:54:56.823114   39013 system_pods.go:61] "kube-scheduler-no-preload-20220629114832-24356" [5909a6d8-7ca6-4042-9a76-dbd460c37ea9] Running
	I0629 11:54:56.823120   39013 system_pods.go:61] "metrics-server-5c6f97fb75-8l9bk" [2716023f-a52f-44c4-858b-ec6667a36b0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 11:54:56.823127   39013 system_pods.go:61] "storage-provisioner" [285cc482-2cd9-4283-bc5a-1ef2e61213f8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0629 11:54:56.823132   39013 system_pods.go:74] duration metric: took 186.444677ms to wait for pod list to return data ...
	I0629 11:54:56.823137   39013 default_sa.go:34] waiting for default service account to be created ...
	I0629 11:54:57.019766   39013 default_sa.go:45] found service account: "default"
	I0629 11:54:57.019779   39013 default_sa.go:55] duration metric: took 196.631815ms for default service account to be created ...
	I0629 11:54:57.019785   39013 system_pods.go:116] waiting for k8s-apps to be running ...
	I0629 11:54:57.222905   39013 system_pods.go:86] 9 kube-system pods found
	I0629 11:54:57.222918   39013 system_pods.go:89] "coredns-6d4b75cb6d-fcqdl" [fbcd50cd-0663-4e51-b103-e520c8d33ce3] Running
	I0629 11:54:57.222923   39013 system_pods.go:89] "coredns-6d4b75cb6d-mkj7b" [cdff1c2d-7c51-46bb-bd66-28e55f071f74] Running
	I0629 11:54:57.222927   39013 system_pods.go:89] "etcd-no-preload-20220629114832-24356" [f20e1065-ccd6-4e6b-9f89-19a78c82d84c] Running
	I0629 11:54:57.222930   39013 system_pods.go:89] "kube-apiserver-no-preload-20220629114832-24356" [ecc08c98-b6c2-44b1-892f-6190e6bf0f52] Running
	I0629 11:54:57.222934   39013 system_pods.go:89] "kube-controller-manager-no-preload-20220629114832-24356" [9d831661-e795-486e-9acf-c95e6bfe23b9] Running
	I0629 11:54:57.222939   39013 system_pods.go:89] "kube-proxy-7cvpr" [470eaa9c-23cf-4ede-ab50-7ed59f41354a] Running
	I0629 11:54:57.222942   39013 system_pods.go:89] "kube-scheduler-no-preload-20220629114832-24356" [5909a6d8-7ca6-4042-9a76-dbd460c37ea9] Running
	I0629 11:54:57.222948   39013 system_pods.go:89] "metrics-server-5c6f97fb75-8l9bk" [2716023f-a52f-44c4-858b-ec6667a36b0c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 11:54:57.222955   39013 system_pods.go:89] "storage-provisioner" [285cc482-2cd9-4283-bc5a-1ef2e61213f8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0629 11:54:57.222960   39013 system_pods.go:126] duration metric: took 203.164956ms to wait for k8s-apps to be running ...
	I0629 11:54:57.222966   39013 system_svc.go:44] waiting for kubelet service to be running ....
	I0629 11:54:57.223017   39013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 11:54:57.232711   39013 system_svc.go:56] duration metric: took 9.738308ms WaitForService to wait for kubelet.
	I0629 11:54:57.232724   39013 kubeadm.go:572] duration metric: took 4.640018458s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0629 11:54:57.232738   39013 node_conditions.go:102] verifying NodePressure condition ...
	I0629 11:54:57.420496   39013 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0629 11:54:57.420509   39013 node_conditions.go:123] node cpu capacity is 6
	I0629 11:54:57.420517   39013 node_conditions.go:105] duration metric: took 187.769826ms to run NodePressure ...
	I0629 11:54:57.420539   39013 start.go:213] waiting for startup goroutines ...
	I0629 11:54:57.450003   39013 start.go:506] kubectl: 1.24.0, cluster: 1.24.2 (minor skew: 0)
	I0629 11:54:57.471079   39013 out.go:177] * Done! kubectl is now configured to use "no-preload-20220629114832-24356" cluster and "default" namespace by default
	I0629 11:54:56.034443   39321 logs.go:274] 0 containers: []
	W0629 11:54:56.049359   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:54:56.049422   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:54:56.078685   39321 logs.go:274] 0 containers: []
	W0629 11:54:56.078697   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:54:56.078752   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:54:56.119131   39321 logs.go:274] 0 containers: []
	W0629 11:54:56.119143   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:54:56.119202   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:54:56.147731   39321 logs.go:274] 0 containers: []
	W0629 11:54:56.147743   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:54:56.147801   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:54:56.176982   39321 logs.go:274] 0 containers: []
	W0629 11:54:56.176994   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:54:56.177049   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:54:56.205600   39321 logs.go:274] 0 containers: []
	W0629 11:54:56.205613   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:54:56.205667   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:54:56.234552   39321 logs.go:274] 0 containers: []
	W0629 11:54:56.234564   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:54:56.234570   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:54:56.234576   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:54:56.275806   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:54:56.275822   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:54:56.288255   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:54:56.288270   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:54:56.343278   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:54:56.343289   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:54:56.343296   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:54:56.357151   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:54:56.357163   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:54:58.409308   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052071728s)
	I0629 11:55:00.909863   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:00.975039   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:01.009426   39321 logs.go:274] 0 containers: []
	W0629 11:55:01.009439   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:01.009500   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:01.058626   39321 logs.go:274] 0 containers: []
	W0629 11:55:01.058638   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:01.058715   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:01.096270   39321 logs.go:274] 0 containers: []
	W0629 11:55:01.096285   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:01.096370   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:01.130375   39321 logs.go:274] 0 containers: []
	W0629 11:55:01.130388   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:01.130446   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:01.167367   39321 logs.go:274] 0 containers: []
	W0629 11:55:01.167379   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:01.167443   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:01.200318   39321 logs.go:274] 0 containers: []
	W0629 11:55:01.200330   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:01.200390   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:01.231557   39321 logs.go:274] 0 containers: []
	W0629 11:55:01.231570   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:01.231629   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:01.266142   39321 logs.go:274] 0 containers: []
	W0629 11:55:01.266179   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:01.266211   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:01.266225   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:03.348388   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.082087684s)
	I0629 11:55:03.348526   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:03.348534   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:03.393758   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:03.393788   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:03.412557   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:03.412576   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:03.479793   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:03.479808   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:03.479818   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:05.995421   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:06.477124   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:06.508598   39321 logs.go:274] 0 containers: []
	W0629 11:55:06.508609   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:06.508668   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:06.571634   39321 logs.go:274] 0 containers: []
	W0629 11:55:06.571648   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:06.571709   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:06.603733   39321 logs.go:274] 0 containers: []
	W0629 11:55:06.603750   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:06.603821   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:06.641504   39321 logs.go:274] 0 containers: []
	W0629 11:55:06.641540   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:06.641612   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:06.680642   39321 logs.go:274] 0 containers: []
	W0629 11:55:06.680654   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:06.680718   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:06.719154   39321 logs.go:274] 0 containers: []
	W0629 11:55:06.719166   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:06.719243   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:06.752660   39321 logs.go:274] 0 containers: []
	W0629 11:55:06.752672   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:06.752781   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:06.790338   39321 logs.go:274] 0 containers: []
	W0629 11:55:06.790350   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:06.790357   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:06.790364   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:06.839137   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:06.839156   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:06.855958   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:06.855978   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:06.924265   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:06.924279   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:06.924285   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:06.947627   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:06.947646   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:09.012320   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.064598664s)
	I0629 11:55:11.512790   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:11.975458   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:12.007895   39321 logs.go:274] 0 containers: []
	W0629 11:55:12.007907   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:12.007963   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:12.039685   39321 logs.go:274] 0 containers: []
	W0629 11:55:12.039696   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:12.039751   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:12.068287   39321 logs.go:274] 0 containers: []
	W0629 11:55:12.068306   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:12.068380   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:12.097250   39321 logs.go:274] 0 containers: []
	W0629 11:55:12.097262   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:12.097329   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:12.125908   39321 logs.go:274] 0 containers: []
	W0629 11:55:12.125920   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:12.125974   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:12.155445   39321 logs.go:274] 0 containers: []
	W0629 11:55:12.155457   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:12.155513   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:12.185314   39321 logs.go:274] 0 containers: []
	W0629 11:55:12.185326   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:12.185383   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:12.214629   39321 logs.go:274] 0 containers: []
	W0629 11:55:12.214639   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:12.214646   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:12.214653   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:12.271182   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:12.271194   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:12.271204   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:12.286914   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:12.286928   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:14.343425   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056423824s)
	I0629 11:55:14.343535   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:14.343543   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:14.383870   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:14.383883   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:16.897690   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:16.976654   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:17.012584   39321 logs.go:274] 0 containers: []
	W0629 11:55:17.012596   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:17.012657   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:17.044046   39321 logs.go:274] 0 containers: []
	W0629 11:55:17.044058   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:17.044124   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:17.074296   39321 logs.go:274] 0 containers: []
	W0629 11:55:17.074308   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:17.074365   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:17.115757   39321 logs.go:274] 0 containers: []
	W0629 11:55:17.115768   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:17.115824   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:17.145895   39321 logs.go:274] 0 containers: []
	W0629 11:55:17.145906   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:17.145962   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:17.175767   39321 logs.go:274] 0 containers: []
	W0629 11:55:17.175777   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:17.175843   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:17.205469   39321 logs.go:274] 0 containers: []
	W0629 11:55:17.205480   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:17.205540   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:17.234651   39321 logs.go:274] 0 containers: []
	W0629 11:55:17.234663   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:17.234670   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:17.234677   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:17.277938   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:17.277952   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:17.289697   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:17.289715   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:17.341609   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:17.341618   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:17.341625   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:17.355655   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:17.355667   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:19.408285   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052537682s)
	I0629 11:55:21.910724   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:21.975500   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:22.004837   39321 logs.go:274] 0 containers: []
	W0629 11:55:22.004854   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:22.004921   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:22.035732   39321 logs.go:274] 0 containers: []
	W0629 11:55:22.035743   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:22.035801   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:22.069625   39321 logs.go:274] 0 containers: []
	W0629 11:55:22.069636   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:22.069692   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:22.099818   39321 logs.go:274] 0 containers: []
	W0629 11:55:22.099832   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:22.099880   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:22.130176   39321 logs.go:274] 0 containers: []
	W0629 11:55:22.130188   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:22.130247   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:22.162002   39321 logs.go:274] 0 containers: []
	W0629 11:55:22.162019   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:22.162078   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:22.190365   39321 logs.go:274] 0 containers: []
	W0629 11:55:22.190379   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:22.190442   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:22.219748   39321 logs.go:274] 0 containers: []
	W0629 11:55:22.219761   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:22.219767   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:22.219777   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:22.273321   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:22.273337   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:22.273352   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:22.287787   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:22.287800   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:24.342535   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054658523s)
	I0629 11:55:24.342644   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:24.342651   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:24.382581   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:24.382593   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:26.895697   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:26.977747   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:27.008926   39321 logs.go:274] 0 containers: []
	W0629 11:55:27.008938   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:27.009000   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:27.038100   39321 logs.go:274] 0 containers: []
	W0629 11:55:27.038111   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:27.038168   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:27.067169   39321 logs.go:274] 0 containers: []
	W0629 11:55:27.067180   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:27.067236   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:27.095625   39321 logs.go:274] 0 containers: []
	W0629 11:55:27.095637   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:27.095694   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:27.125107   39321 logs.go:274] 0 containers: []
	W0629 11:55:27.125118   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:27.125175   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:27.154968   39321 logs.go:274] 0 containers: []
	W0629 11:55:27.154982   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:27.155040   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:27.183779   39321 logs.go:274] 0 containers: []
	W0629 11:55:27.183791   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:27.183850   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:27.212801   39321 logs.go:274] 0 containers: []
	W0629 11:55:27.212813   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:27.212820   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:27.212827   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:27.253498   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:27.253514   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:27.265985   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:27.266001   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:27.322114   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:27.322123   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:27.322130   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:27.335806   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:27.335821   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:29.392403   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056508883s)
	I0629 11:55:31.893240   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:31.977413   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:32.008956   39321 logs.go:274] 0 containers: []
	W0629 11:55:32.008971   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:32.009028   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:32.038201   39321 logs.go:274] 0 containers: []
	W0629 11:55:32.038212   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:32.038267   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:32.066990   39321 logs.go:274] 0 containers: []
	W0629 11:55:32.067002   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:32.067057   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:32.097577   39321 logs.go:274] 0 containers: []
	W0629 11:55:32.097593   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:32.097667   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:32.127554   39321 logs.go:274] 0 containers: []
	W0629 11:55:32.127567   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:32.127629   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:32.156429   39321 logs.go:274] 0 containers: []
	W0629 11:55:32.156443   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:32.156507   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:32.185611   39321 logs.go:274] 0 containers: []
	W0629 11:55:32.185623   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:32.185681   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:32.214323   39321 logs.go:274] 0 containers: []
	W0629 11:55:32.214335   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:32.214342   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:32.214348   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:32.267585   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:32.267595   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:32.267601   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:32.282076   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:32.282088   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:34.339416   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057253442s)
	I0629 11:55:34.339525   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:34.339531   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:34.379921   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:34.379933   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:36.894519   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:36.975922   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:37.010242   39321 logs.go:274] 0 containers: []
	W0629 11:55:37.010263   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:37.010330   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:37.040881   39321 logs.go:274] 0 containers: []
	W0629 11:55:37.040893   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:37.040949   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:37.070230   39321 logs.go:274] 0 containers: []
	W0629 11:55:37.070242   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:37.070308   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:37.101292   39321 logs.go:274] 0 containers: []
	W0629 11:55:37.101303   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:37.101353   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:37.131101   39321 logs.go:274] 0 containers: []
	W0629 11:55:37.131113   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:37.131173   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:37.159540   39321 logs.go:274] 0 containers: []
	W0629 11:55:37.159552   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:37.159610   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:37.189520   39321 logs.go:274] 0 containers: []
	W0629 11:55:37.189532   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:37.189588   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:37.219222   39321 logs.go:274] 0 containers: []
	W0629 11:55:37.219233   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:37.219241   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:37.219248   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:37.259017   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:37.259032   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:37.270684   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:37.270696   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:37.322386   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:37.322399   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:37.322407   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:37.335982   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:37.335995   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:39.390442   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054372053s)
	I0629 11:55:41.891223   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:41.978245   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:42.009313   39321 logs.go:274] 0 containers: []
	W0629 11:55:42.009326   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:42.009380   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:42.039076   39321 logs.go:274] 0 containers: []
	W0629 11:55:42.039089   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:42.039146   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:42.068464   39321 logs.go:274] 0 containers: []
	W0629 11:55:42.068478   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:42.068534   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:42.097800   39321 logs.go:274] 0 containers: []
	W0629 11:55:42.097811   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:42.097866   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:42.127026   39321 logs.go:274] 0 containers: []
	W0629 11:55:42.127038   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:42.127093   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:42.156370   39321 logs.go:274] 0 containers: []
	W0629 11:55:42.156382   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:42.156444   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:42.186834   39321 logs.go:274] 0 containers: []
	W0629 11:55:42.186846   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:42.186901   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:42.215822   39321 logs.go:274] 0 containers: []
	W0629 11:55:42.215835   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:42.215846   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:42.215855   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:42.230305   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:42.230319   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:44.285629   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055236751s)
	I0629 11:55:44.285764   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:44.285771   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:44.325646   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:44.325660   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:44.337146   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:44.337159   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:44.389786   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:46.891554   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:46.978341   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:47.009917   39321 logs.go:274] 0 containers: []
	W0629 11:55:47.009929   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:47.009985   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:47.038523   39321 logs.go:274] 0 containers: []
	W0629 11:55:47.038534   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:47.038588   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:47.067903   39321 logs.go:274] 0 containers: []
	W0629 11:55:47.067915   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:47.067970   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:47.098087   39321 logs.go:274] 0 containers: []
	W0629 11:55:47.098099   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:47.098155   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:47.127152   39321 logs.go:274] 0 containers: []
	W0629 11:55:47.127164   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:47.127220   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:47.157028   39321 logs.go:274] 0 containers: []
	W0629 11:55:47.157039   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:47.157096   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:47.186471   39321 logs.go:274] 0 containers: []
	W0629 11:55:47.186483   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:47.186541   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:47.215975   39321 logs.go:274] 0 containers: []
	W0629 11:55:47.215988   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:47.215997   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:47.216004   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:47.256256   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:47.256268   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:47.268708   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:47.268721   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:47.320566   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:47.320577   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:47.320583   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:47.334197   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:47.334209   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:49.391366   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057082304s)
	I0629 11:55:51.893853   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:51.976453   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:52.006330   39321 logs.go:274] 0 containers: []
	W0629 11:55:52.006344   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:52.006418   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:52.036416   39321 logs.go:274] 0 containers: []
	W0629 11:55:52.036428   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:52.036489   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:52.065995   39321 logs.go:274] 0 containers: []
	W0629 11:55:52.066007   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:52.066062   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:52.095567   39321 logs.go:274] 0 containers: []
	W0629 11:55:52.095579   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:52.095639   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:52.125457   39321 logs.go:274] 0 containers: []
	W0629 11:55:52.125470   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:52.125526   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:52.154476   39321 logs.go:274] 0 containers: []
	W0629 11:55:52.154488   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:52.154545   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:52.183063   39321 logs.go:274] 0 containers: []
	W0629 11:55:52.183074   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:52.183133   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:52.212690   39321 logs.go:274] 0 containers: []
	W0629 11:55:52.212702   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:52.212708   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:52.212715   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:52.253322   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:52.253336   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 11:55:52.264898   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:52.264911   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:52.317711   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:52.317722   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:52.317729   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:52.331473   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:52.331486   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:54.387012   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055452409s)
	I0629 11:55:56.889424   39321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:55:56.978656   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 11:55:57.009805   39321 logs.go:274] 0 containers: []
	W0629 11:55:57.009819   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 11:55:57.009887   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 11:55:57.038560   39321 logs.go:274] 0 containers: []
	W0629 11:55:57.038572   39321 logs.go:276] No container was found matching "etcd"
	I0629 11:55:57.038628   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 11:55:57.067167   39321 logs.go:274] 0 containers: []
	W0629 11:55:57.067179   39321 logs.go:276] No container was found matching "coredns"
	I0629 11:55:57.067242   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 11:55:57.095884   39321 logs.go:274] 0 containers: []
	W0629 11:55:57.095896   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 11:55:57.095954   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 11:55:57.125648   39321 logs.go:274] 0 containers: []
	W0629 11:55:57.125660   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 11:55:57.125717   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 11:55:57.157517   39321 logs.go:274] 0 containers: []
	W0629 11:55:57.157531   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 11:55:57.157587   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 11:55:57.190283   39321 logs.go:274] 0 containers: []
	W0629 11:55:57.190296   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 11:55:57.190357   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 11:55:57.221529   39321 logs.go:274] 0 containers: []
	W0629 11:55:57.221543   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 11:55:57.221550   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 11:55:57.221559   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 11:55:57.283015   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 11:55:57.283028   39321 logs.go:123] Gathering logs for Docker ...
	I0629 11:55:57.283037   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 11:55:57.298819   39321 logs.go:123] Gathering logs for container status ...
	I0629 11:55:57.298833   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 11:55:59.359979   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.061069192s)
	I0629 11:55:59.360122   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 11:55:59.360130   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 11:55:59.403714   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 11:55:59.403731   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-29 18:49:58 UTC, end at Wed 2022-06-29 18:56:05 UTC. --
	Jun 29 18:54:29 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:29.511314510Z" level=info msg="ignoring event" container=907fee6b0ab951dc570b507cfb53082f088d6e46e8f53b523f51806bbe7b6662 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:54:29 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:29.583997951Z" level=info msg="ignoring event" container=8d483a26327fea368267b8e3556918ffdc27582da76ac2cf4e7c30cf84ea008c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:54:29 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:29.687870390Z" level=info msg="ignoring event" container=882f6ead5f8649814f45dde882d7bababe2e9ea489a1db1d6341be2af91e0441 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:54:29 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:29.754239281Z" level=info msg="ignoring event" container=c65e7645bda76e59b22a150b3d65f3c25c956781bf0c9b2228b7a35c00d48463 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:54:29 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:29.871531414Z" level=info msg="ignoring event" container=f66e80ddbf1fcf48d493edf40bd111a16353bb1369c28fcab3769150326cef4b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:54:29 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:29.938807358Z" level=info msg="ignoring event" container=61ea5e8a5dad91bee9c26f03b2a0dc70191635968e3a4632cf100a9946fbdb5c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:54:30 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:30.029641025Z" level=info msg="ignoring event" container=b9a37d3d69e4feebcc42ec5347326b86471dfa8e5ab53141df52f19e1f6fcc3b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:54:55 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:55.003702864Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 18:54:55 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:55.003726242Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 18:54:55 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:55.004991117Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 18:54:57 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:57.597544824Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 29 18:54:58 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:58.311396066Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 29 18:54:59 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:59.179377083Z" level=info msg="ignoring event" container=af49197e52b2c31302999c10d2c306b0dab30a799cb3dd46805f0a0f863d5902 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:54:59 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:54:59.224426852Z" level=info msg="ignoring event" container=923b92c51ad6061167c22b8038f54aa7e3f07db7f2bba7552158eae2d4a0672b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:55:03 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:55:03.628142970Z" level=info msg="ignoring event" container=a7f95f34f56ed3f6e168fe1beb439bd6ce13bec913fce2309d48a785860e2096 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:55:03 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:55:03.662088432Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jun 29 18:55:04 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:55:04.279252927Z" level=info msg="ignoring event" container=592a996e048c801c02c22a6c449b60a88f58dbfbdebe6df0acb83d9b78dc8aea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:55:09 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:55:09.602202019Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 18:55:09 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:55:09.602255388Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 18:55:09 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:55:09.644169103Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 18:55:19 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:55:19.882429312Z" level=info msg="ignoring event" container=45257cf5b348193f22418066d665fc1ac8158235b6195ef3672e83d44cfe947b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 18:56:02 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:56:02.123631772Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 18:56:02 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:56:02.123723544Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 18:56:02 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:56:02.129456422Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 18:56:03 no-preload-20220629114832-24356 dockerd[488]: time="2022-06-29T18:56:03.548343533Z" level=info msg="ignoring event" container=c5e6c33712a79758f8ebc9fb850783102d18ce2eaa9847f1507b89a3497025f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	c5e6c33712a79       a90209bb39e3d                                                                                    3 seconds ago        Exited              dashboard-metrics-scraper   3                   e36bb117aeda0
	565d25698c926       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   56 seconds ago       Running             kubernetes-dashboard        0                   ebf59b2d38d52
	18a1e2c19d2b3       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   54bf5bf72c3cd
	11e93671bb6e7       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   227e9f2b6e470
	f2624e6409795       a634548d10b03                                                                                    About a minute ago   Running             kube-proxy                  0                   afa8fe6012e83
	dcbaad6c52814       34cdf99b1bb3b                                                                                    About a minute ago   Running             kube-controller-manager     0                   c5f2433985f1b
	b7d773db9f211       d3377ffb7177c                                                                                    About a minute ago   Running             kube-apiserver              0                   c38e44e207e54
	3d1d52e8fbacf       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   71b251c62fffb
	75538b8195286       5d725196c1f47                                                                                    About a minute ago   Running             kube-scheduler              0                   04f22e297be93
	
	* 
	* ==> coredns [11e93671bb6e] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/health: Local health request to "http://:8080/health" failed: Get "http://:8080/health": dial tcp :8080: connect: connection refused
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220629114832-24356
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220629114832-24356
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=80ef72c6e06144133907f90b1b2924df52b551ed
	                    minikube.k8s.io/name=no-preload-20220629114832-24356
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_29T11_54_38_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Jun 2022 18:54:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220629114832-24356
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Jun 2022 18:55:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Jun 2022 18:55:58 +0000   Wed, 29 Jun 2022 18:54:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Jun 2022 18:55:58 +0000   Wed, 29 Jun 2022 18:54:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Jun 2022 18:55:58 +0000   Wed, 29 Jun 2022 18:54:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Jun 2022 18:55:58 +0000   Wed, 29 Jun 2022 18:54:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    no-preload-20220629114832-24356
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbe1e1cef6e940328962dca52b3c5731
	  System UUID:                27a72ab0-3369-43c8-aa5b-98e38866b3a6
	  Boot ID:                    fadc233d-8cf8-4f28-b4a1-fb218440cdcd
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-fcqdl                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     73s
	  kube-system                 etcd-no-preload-20220629114832-24356                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         89s
	  kube-system                 kube-apiserver-no-preload-20220629114832-24356             250m (4%)     0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-no-preload-20220629114832-24356    200m (3%)     0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-proxy-7cvpr                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-scheduler-no-preload-20220629114832-24356             100m (1%)     0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 metrics-server-5c6f97fb75-8l9bk                            100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         71s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-6dcpk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-qmktl                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 73s   kube-proxy       
	  Normal  Starting                 87s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  87s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  87s   kubelet          Node no-preload-20220629114832-24356 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s   kubelet          Node no-preload-20220629114832-24356 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s   kubelet          Node no-preload-20220629114832-24356 status is now: NodeHasSufficientPID
	  Normal  NodeReady                87s   kubelet          Node no-preload-20220629114832-24356 status is now: NodeReady
	  Normal  RegisteredNode           74s   node-controller  Node no-preload-20220629114832-24356 event: Registered Node no-preload-20220629114832-24356 in Controller
	  Normal  Starting                 7s    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7s    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7s    kubelet          Node no-preload-20220629114832-24356 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s    kubelet          Node no-preload-20220629114832-24356 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s    kubelet          Node no-preload-20220629114832-24356 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [3d1d52e8fbac] <==
	* {"level":"info","ts":"2022-06-29T18:54:33.091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-06-29T18:54:33.091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-06-29T18:54:33.092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-06-29T18:54:33.095Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:no-preload-20220629114832-24356 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-29T18:54:33.095Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T18:54:33.095Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T18:54:33.096Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-06-29T18:54:33.096Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-29T18:54:33.096Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T18:54:33.097Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T18:54:33.097Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T18:54:33.097Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T18:54:33.101Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-29T18:54:33.101Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2022-06-29T18:56:03.247Z","caller":"etcdserver/v3_server.go:840","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":2289941531955962829,"retry-timeout":"500ms"}
	{"level":"info","ts":"2022-06-29T18:56:03.399Z","caller":"traceutil/trace.go:171","msg":"trace[589112611] linearizableReadLoop","detail":"{readStateIndex:606; appliedIndex:606; }","duration":"652.047028ms","start":"2022-06-29T18:56:02.747Z","end":"2022-06-29T18:56:03.399Z","steps":["trace[589112611] 'read index received'  (duration: 652.042544ms)","trace[589112611] 'applied index is now lower than readState.Index'  (duration: 3.954µs)"],"step_count":2}
	{"level":"warn","ts":"2022-06-29T18:56:03.401Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"321.243693ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2022-06-29T18:56:03.401Z","caller":"traceutil/trace.go:171","msg":"trace[13339769] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; response_count:0; response_revision:573; }","duration":"321.354367ms","start":"2022-06-29T18:56:03.079Z","end":"2022-06-29T18:56:03.401Z","steps":["trace[13339769] 'agreement among raft nodes before linearized reading'  (duration: 319.611792ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T18:56:03.401Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"653.902116ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-7cvpr\" ","response":"range_response_count:1 size:4419"}
	{"level":"info","ts":"2022-06-29T18:56:03.401Z","caller":"traceutil/trace.go:171","msg":"trace[89238416] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-7cvpr; range_end:; response_count:1; response_revision:573; }","duration":"653.922562ms","start":"2022-06-29T18:56:02.747Z","end":"2022-06-29T18:56:03.401Z","steps":["trace[89238416] 'agreement among raft nodes before linearized reading'  (duration: 652.148971ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T18:56:03.401Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T18:56:02.747Z","time spent":"653.947013ms","remote":"127.0.0.1:49250","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":4443,"request content":"key:\"/registry/pods/kube-system/kube-proxy-7cvpr\" "}
	{"level":"warn","ts":"2022-06-29T18:56:03.401Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T18:56:03.079Z","time spent":"321.387326ms","remote":"127.0.0.1:49280","response type":"/etcdserverpb.KV/Range","request count":0,"request size":80,"response count":1,"response size":31,"request content":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true "}
	{"level":"warn","ts":"2022-06-29T18:56:03.401Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"368.655725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-29T18:56:03.401Z","caller":"traceutil/trace.go:171","msg":"trace[1527201343] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; response_count:0; response_revision:573; }","duration":"368.88999ms","start":"2022-06-29T18:56:03.032Z","end":"2022-06-29T18:56:03.401Z","steps":["trace[1527201343] 'agreement among raft nodes before linearized reading'  (duration: 366.942779ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-29T18:56:03.401Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-29T18:56:03.032Z","time spent":"369.104882ms","remote":"127.0.0.1:49298","response type":"/etcdserverpb.KV/Range","request count":0,"request size":68,"response count":0,"response size":29,"request content":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true "}
	
	* 
	* ==> kernel <==
	*  18:56:06 up  1:03,  0 users,  load average: 0.46, 0.87, 1.15
	Linux no-preload-20220629114832-24356 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [b7d773db9f21] <==
	* I0629 18:54:38.648885       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0629 18:54:38.728308       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0629 18:54:51.909926       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0629 18:54:52.009067       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0629 18:54:52.469667       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0629 18:54:54.201490       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.102.20.88]
	I0629 18:54:54.473744       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.96.223.208]
	I0629 18:54:54.482470       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.107.20.235]
	W0629 18:54:55.000271       1 handler_proxy.go:102] no RequestInfo found in the context
	W0629 18:54:55.000367       1 handler_proxy.go:102] no RequestInfo found in the context
	E0629 18:54:55.000375       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0629 18:54:55.000387       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0629 18:54:55.000388       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0629 18:54:55.001482       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0629 18:55:57.666864       1 handler_proxy.go:102] no RequestInfo found in the context
	E0629 18:55:57.666911       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0629 18:55:57.666925       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0629 18:55:57.667583       1 handler_proxy.go:102] no RequestInfo found in the context
	E0629 18:55:57.667593       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0629 18:55:57.667933       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0629 18:56:03.403109       1 trace.go:205] Trace[499408279]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-proxy-7cvpr,user-agent:kubelet/v1.24.2 (linux/amd64) kubernetes/f66044f,audit-id:95b033a9-00c4-44fb-b3fb-ed4e89601584,client:192.168.67.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (29-Jun-2022 18:56:02.746) (total time: 656ms):
	Trace[499408279]: ---"About to write a response" 655ms (18:56:03.402)
	Trace[499408279]: [656.098296ms] [656.098296ms] END
	
	* 
	* ==> kube-controller-manager [dcbaad6c5281] <==
	* I0629 18:54:52.110866       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-mkj7b"
	I0629 18:54:52.112523       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0629 18:54:52.114483       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-fcqdl"
	I0629 18:54:52.136657       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-mkj7b"
	I0629 18:54:54.006469       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0629 18:54:54.073412       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-8l9bk"
	I0629 18:54:54.280630       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0629 18:54:54.288071       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 18:54:54.295685       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 18:54:54.297310       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	I0629 18:54:54.298023       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 18:54:54.302860       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0629 18:54:54.303022       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 18:54:54.303270       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 18:54:54.311315       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 18:54:54.311925       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0629 18:54:54.312547       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 18:54:54.312567       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 18:54:54.365261       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-qmktl"
	I0629 18:54:54.367348       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-6dcpk"
	W0629 18:55:00.150194       1 endpointslice_controller.go:302] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: EndpointSlice informer cache is out of date
	E0629 18:55:21.303238       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0629 18:55:21.715126       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0629 18:55:57.913215       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0629 18:55:57.918204       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [f2624e640979] <==
	* I0629 18:54:52.441561       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0629 18:54:52.441620       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0629 18:54:52.441660       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0629 18:54:52.466334       1 server_others.go:206] "Using iptables Proxier"
	I0629 18:54:52.466371       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0629 18:54:52.466379       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0629 18:54:52.466388       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0629 18:54:52.466432       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 18:54:52.466584       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 18:54:52.466836       1 server.go:661] "Version info" version="v1.24.2"
	I0629 18:54:52.466866       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 18:54:52.467910       1 config.go:317] "Starting service config controller"
	I0629 18:54:52.467941       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0629 18:54:52.468001       1 config.go:226] "Starting endpoint slice config controller"
	I0629 18:54:52.468024       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0629 18:54:52.468044       1 config.go:444] "Starting node config controller"
	I0629 18:54:52.468047       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0629 18:54:52.568469       1 shared_informer.go:262] Caches are synced for node config
	I0629 18:54:52.568497       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0629 18:54:52.568508       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [75538b819528] <==
	* W0629 18:54:35.763508       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0629 18:54:35.763760       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0629 18:54:35.763931       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0629 18:54:35.763969       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0629 18:54:35.764011       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0629 18:54:35.764024       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0629 18:54:35.764041       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0629 18:54:35.764119       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0629 18:54:35.764219       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0629 18:54:35.764231       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0629 18:54:35.764298       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0629 18:54:35.764329       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0629 18:54:35.764303       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0629 18:54:35.764340       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0629 18:54:35.764417       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0629 18:54:35.764479       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0629 18:54:36.619510       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0629 18:54:36.619723       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0629 18:54:36.709370       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0629 18:54:36.709554       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0629 18:54:36.723566       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0629 18:54:36.723603       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0629 18:54:36.727912       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0629 18:54:36.727957       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0629 18:54:37.024685       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-29 18:49:58 UTC, end at Wed 2022-06-29 18:56:07 UTC. --
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407439    9789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddbmp\" (UniqueName: \"kubernetes.io/projected/2716023f-a52f-44c4-858b-ec6667a36b0c-kube-api-access-ddbmp\") pod \"metrics-server-5c6f97fb75-8l9bk\" (UID: \"2716023f-a52f-44c4-858b-ec6667a36b0c\") " pod="kube-system/metrics-server-5c6f97fb75-8l9bk"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407468    9789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbcd50cd-0663-4e51-b103-e520c8d33ce3-config-volume\") pod \"coredns-6d4b75cb6d-fcqdl\" (UID: \"fbcd50cd-0663-4e51-b103-e520c8d33ce3\") " pod="kube-system/coredns-6d4b75cb6d-fcqdl"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407488    9789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/686867af-2f46-499f-a6b3-5322753bab16-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-qmktl\" (UID: \"686867af-2f46-499f-a6b3-5322753bab16\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-qmktl"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407509    9789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1e81bb70-d310-485c-bf9e-ffa1f6584c1e-tmp-volume\") pod \"dashboard-metrics-scraper-dffd48c4c-6dcpk\" (UID: \"1e81bb70-d310-485c-bf9e-ffa1f6584c1e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-6dcpk"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407525    9789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpcd4\" (UniqueName: \"kubernetes.io/projected/1e81bb70-d310-485c-bf9e-ffa1f6584c1e-kube-api-access-rpcd4\") pod \"dashboard-metrics-scraper-dffd48c4c-6dcpk\" (UID: \"1e81bb70-d310-485c-bf9e-ffa1f6584c1e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-6dcpk"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407540    9789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/470eaa9c-23cf-4ede-ab50-7ed59f41354a-xtables-lock\") pod \"kube-proxy-7cvpr\" (UID: \"470eaa9c-23cf-4ede-ab50-7ed59f41354a\") " pod="kube-system/kube-proxy-7cvpr"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407556    9789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xvfw\" (UniqueName: \"kubernetes.io/projected/fbcd50cd-0663-4e51-b103-e520c8d33ce3-kube-api-access-2xvfw\") pod \"coredns-6d4b75cb6d-fcqdl\" (UID: \"fbcd50cd-0663-4e51-b103-e520c8d33ce3\") " pod="kube-system/coredns-6d4b75cb6d-fcqdl"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407585    9789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5sjn\" (UniqueName: \"kubernetes.io/projected/470eaa9c-23cf-4ede-ab50-7ed59f41354a-kube-api-access-f5sjn\") pod \"kube-proxy-7cvpr\" (UID: \"470eaa9c-23cf-4ede-ab50-7ed59f41354a\") " pod="kube-system/kube-proxy-7cvpr"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407613    9789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g42x5\" (UniqueName: \"kubernetes.io/projected/285cc482-2cd9-4283-bc5a-1ef2e61213f8-kube-api-access-g42x5\") pod \"storage-provisioner\" (UID: \"285cc482-2cd9-4283-bc5a-1ef2e61213f8\") " pod="kube-system/storage-provisioner"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407633    9789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/470eaa9c-23cf-4ede-ab50-7ed59f41354a-lib-modules\") pod \"kube-proxy-7cvpr\" (UID: \"470eaa9c-23cf-4ede-ab50-7ed59f41354a\") " pod="kube-system/kube-proxy-7cvpr"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407660    9789 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/470eaa9c-23cf-4ede-ab50-7ed59f41354a-kube-proxy\") pod \"kube-proxy-7cvpr\" (UID: \"470eaa9c-23cf-4ede-ab50-7ed59f41354a\") " pod="kube-system/kube-proxy-7cvpr"
	Jun 29 18:55:59 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:55:59.407672    9789 reconciler.go:157] "Reconciler: start to sync state"
	Jun 29 18:56:00 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:56:00.545074    9789 request.go:601] Waited for 1.13194371s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jun 29 18:56:00 no-preload-20220629114832-24356 kubelet[9789]: E0629 18:56:00.572923    9789 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-no-preload-20220629114832-24356\" already exists" pod="kube-system/kube-apiserver-no-preload-20220629114832-24356"
	Jun 29 18:56:00 no-preload-20220629114832-24356 kubelet[9789]: E0629 18:56:00.801974    9789 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-no-preload-20220629114832-24356\" already exists" pod="kube-system/etcd-no-preload-20220629114832-24356"
	Jun 29 18:56:00 no-preload-20220629114832-24356 kubelet[9789]: E0629 18:56:00.948997    9789 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-no-preload-20220629114832-24356\" already exists" pod="kube-system/kube-controller-manager-no-preload-20220629114832-24356"
	Jun 29 18:56:01 no-preload-20220629114832-24356 kubelet[9789]: E0629 18:56:01.216905    9789 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-no-preload-20220629114832-24356\" already exists" pod="kube-system/kube-scheduler-no-preload-20220629114832-24356"
	Jun 29 18:56:02 no-preload-20220629114832-24356 kubelet[9789]: E0629 18:56:02.130192    9789 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 29 18:56:02 no-preload-20220629114832-24356 kubelet[9789]: E0629 18:56:02.130247    9789 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 29 18:56:02 no-preload-20220629114832-24356 kubelet[9789]: E0629 18:56:02.130388    9789 kuberuntime_manager.go:905] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ddbmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c6f97fb75-8l9bk_kube-system(2716023f-a52f-44c4-858b-ec6667a36b0c): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jun 29 18:56:02 no-preload-20220629114832-24356 kubelet[9789]: E0629 18:56:02.130413    9789 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c6f97fb75-8l9bk" podUID=2716023f-a52f-44c4-858b-ec6667a36b0c
	Jun 29 18:56:02 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:56:02.350014    9789 scope.go:110] "RemoveContainer" containerID="45257cf5b348193f22418066d665fc1ac8158235b6195ef3672e83d44cfe947b"
	Jun 29 18:56:04 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:56:04.538289    9789 scope.go:110] "RemoveContainer" containerID="45257cf5b348193f22418066d665fc1ac8158235b6195ef3672e83d44cfe947b"
	Jun 29 18:56:04 no-preload-20220629114832-24356 kubelet[9789]: I0629 18:56:04.538925    9789 scope.go:110] "RemoveContainer" containerID="c5e6c33712a79758f8ebc9fb850783102d18ce2eaa9847f1507b89a3497025f7"
	Jun 29 18:56:04 no-preload-20220629114832-24356 kubelet[9789]: E0629 18:56:04.539125    9789 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-6dcpk_kubernetes-dashboard(1e81bb70-d310-485c-bf9e-ffa1f6584c1e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-6dcpk" podUID=1e81bb70-d310-485c-bf9e-ffa1f6584c1e
	
	* 
	* ==> kubernetes-dashboard [565d25698c92] <==
	* 2022/06/29 18:55:09 Using namespace: kubernetes-dashboard
	2022/06/29 18:55:09 Using in-cluster config to connect to apiserver
	2022/06/29 18:55:09 Using secret token for csrf signing
	2022/06/29 18:55:09 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/29 18:55:09 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/29 18:55:09 Successful initial request to the apiserver, version: v1.24.2
	2022/06/29 18:55:09 Generating JWE encryption key
	2022/06/29 18:55:09 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/29 18:55:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/29 18:55:10 Initializing JWE encryption key from synchronized object
	2022/06/29 18:55:10 Creating in-cluster Sidecar client
	2022/06/29 18:55:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/29 18:55:10 Serving insecurely on HTTP port: 9090
	2022/06/29 18:55:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/29 18:55:09 Starting overwatch
	
	* 
	* ==> storage-provisioner [18a1e2c19d2b] <==
	* I0629 18:54:55.335760       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0629 18:54:55.343887       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0629 18:54:55.343956       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0629 18:54:55.350009       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0629 18:54:55.350109       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1b1c298e-b0d1-4b66-82b3-900d6c3a836c", APIVersion:"v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20220629114832-24356_7eeea5c0-179a-45b4-bb79-0ab563f6601a became leader
	I0629 18:54:55.350229       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20220629114832-24356_7eeea5c0-179a-45b4-bb79-0ab563f6601a!
	I0629 18:54:55.451647       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20220629114832-24356_7eeea5c0-179a-45b4-bb79-0ab563f6601a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220629114832-24356 -n no-preload-20220629114832-24356
E0629 11:56:07.852774   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220629114832-24356 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-8l9bk
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220629114832-24356 describe pod metrics-server-5c6f97fb75-8l9bk
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220629114832-24356 describe pod metrics-server-5c6f97fb75-8l9bk: exit status 1 (273.669617ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-8l9bk" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220629114832-24356 describe pod metrics-server-5c6f97fb75-8l9bk: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/Pause (44.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (576.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0629 12:01:14.374410   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/bridge-20220629112950-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:01:59.519649   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/enable-default-cni-20220629112950-24356/client.crt: no such file or directory
E0629 12:02:00.664972   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629112951-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:02:10.108404   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629112950-24356/client.crt: no such file or directory
E0629 12:02:16.548105   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629114832-24356/client.crt: no such file or directory
E0629 12:02:18.315172   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubenet-20220629112950-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:03:46.682251   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629112951-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:04:24.477119   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:04:32.674643   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629114832-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:05:47.037649   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629112950-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:05:58.641987   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
E0629 12:06:07.876091   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:06:14.382807   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/bridge-20220629112950-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:06:22.683130   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/false-20220629112951-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:06:31.932491   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629112951-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:06:59.530728   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/enable-default-cni-20220629112950-24356/client.crt: no such file or directory
E0629 12:07:00.674577   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629112951-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:07:18.323760   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubenet-20220629112950-24356/client.crt: no such file or directory
E0629 12:07:21.720470   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:07:27.530815   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
E0629 12:07:37.440517   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/bridge-20220629112950-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:08:22.587654   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/enable-default-cni-20220629112950-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:08:41.402279   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubenet-20220629112950-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:08:46.693261   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629112951-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:09:24.488101   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:09:32.683600   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629114832-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:09:59.644646   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/false-20220629112951-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:10:08.799601   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629112951-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356: exit status 2 (445.79161ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-20220629114717-24356" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220629114717-24356
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220629114717-24356:

-- stdout --
	[
	    {
	        "Id": "b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2",
	        "Created": "2022-06-29T18:47:24.686705454Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246394,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T18:53:02.298159951Z",
	            "FinishedAt": "2022-06-29T18:52:59.492186161Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/hosts",
	        "LogPath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2-json.log",
	        "Name": "/old-k8s-version-20220629114717-24356",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220629114717-24356:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220629114717-24356",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132-init/diff:/var/lib/docker/overlay2/fffebe0fdfada5807aeb835ff23043496ab70477725ee4f168b630301ac03e45/diff:/var/lib/docker/overlay2/d4eb6d2f34aa8e5c143d900dccdec5da9e3d130567442e6745d4efac5202fe49/diff:/var/lib/docker/overlay2/eb35fadba12ed9c48500d69b77e98e7dd72e90d3de5197d58b370df5b5dca4c7/diff:/var/lib/docker/overlay2/7b63894f671ef1edaa7c3b80a2acbde52dcdb21970e320799b6884e79553ea3e/diff:/var/lib/docker/overlay2/3740b6bc6ff226137eb09a6350d4395dc04bd9012c6c66125dc2ea6b663082cd/diff:/var/lib/docker/overlay2/a2fda66ed4937725e85838baed61cac418abe2ba55b4e664bf944246efcdd371/diff:/var/lib/docker/overlay2/574408913c5c73ee699b85768bbb4c0ce70e697bf6eb623e32017c62e8413acd/diff:/var/lib/docker/overlay2/1cde03c3877bfb18ad0533f814863e3030abec268ff30faceab8815ea7e2daf2/diff:/var/lib/docker/overlay2/52bf889e64b2ea0160f303622d5febb9c52b864e5a6dc2bfa5db90933ccaaa29/diff:/var/lib/docker/overlay2/b131e6ae4a7a7f5705d087e4001676276e4daa26d6acfc99799bb4992e322410/diff:/var/lib/docker/overlay2/3f5c774f6f46936a974bfc6530b012fda75a59b22450e3342486fe400ab4b531/diff:/var/lib/docker/overlay2/8462528084f0c44a79e421427e0e4bc9ddd7642428c47ff1899d41b265223245/diff:/var/lib/docker/overlay2/cb9765866d13ba37669ec242ea0a1af87c92c7291c716e52037a2ccadc64ac82/diff:/var/lib/docker/overlay2/f0d06e6fa53f3ca9622f1efcfac6fe3fd18d2e5b9e07be3d624b0b9987073e55/diff:/var/lib/docker/overlay2/4ebd12d8b25cff2d3d8a989c047b696088121f0964cc7f94c6d0178ef16e3e1f/diff:/var/lib/docker/overlay2/40e16f5720fd3a8c1c8792aea0ec143af819f19cad845dde40b57ed7e372ab73/diff:/var/lib/docker/overlay2/3ce5ee64ba683c997a13b7ffa65978b4c9652772729737facd794209d49251c3/diff:/var/lib/docker/overlay2/c55c549a78d490ea576942661ba65103ea2992693548217973bb8fa1a5948b74/diff:/var/lib/docker/overlay2/4651b16dbc2e22b8a43dc1154546514f2076168d12f9c108f85fe7c6e60325f0/diff:/var/lib/docker/overlay2/9576343ea03501b15b520a83ffdc675c6d9ecd501f6ffcf6564dd75aa4f2812a/diff:/var/lib/docker/overlay2/635ba7d01f96fd1ec1acabf157f4e5c00cbf80adf65b7f8873e444745fef2c9b/diff:/var/lib/docker/overlay2/6bbe0ce6ca00a7eb5bd7c22def5fcab4ebecab4a0b4cbc5ed236429671a41b6c/diff:/var/lib/docker/overlay2/b335551ba0fcfd6bff6ef5627289041f3083dc338e67b4f4728d4937bb6fb33a/diff:/var/lib/docker/overlay2/58cd90f6ad9016f3c4befb63eac504c9d2f0fc66251c5c9e3348080785d3cec4/diff:/var/lib/docker/overlay2/b7d943a8463e032d405d531846436b89574f10efeea6e4f2df92e3bb0e169d8e/diff:/var/lib/docker/overlay2/e633899f71c18e322af1b75837392bc89fd4275534b5bc70037965b0b80a770d/diff:/var/lib/docker/overlay2/651aabda39b5851bd186e23bc84f1029d819ed8eb032b13ac12f50f3d1486bfb/diff:/var/lib/docker/overlay2/3b137e27694d242a419b3fd2f8605837edfe77dae9462c63c3d7b41538e82591/diff:/var/lib/docker/overlay2/e9d4369b871c47acb146b73f8cbe14b89b0f74027df9117a7dc73f5dee8fee1c/diff:/var/lib/docker/overlay2/9379269362a969b07cc7d7f9faff9fa3b745529df38758733014a5dbe2470775/diff:/var/lib/docker/overlay2/9231c154723fa536d9894f703ec0388448e8611d5a01d54bca3a5b0a0b17ffd2/diff:/var/lib/docker/overlay2/9610e37ded5c6da7bd2c8edc56c3ae864637bb354f8ea3d6d1ccee6bd5c2aa7f/diff:/var/lib/docker/overlay2/025ecca5e756b1b8177204df7b2f2567a76dda456b2f1a8e312efd63150a8943/diff:/var/lib/docker/overlay2/7e69089e438e096c36ea0a4a37280fd036841e3287e57635e3407eb58fc0b6da/diff:/var/lib/docker/overlay2/c6d9ef67ed33e64c8ac8c4cdc7c33eb68f5266987969676165cabc2cf2fd346b/diff:/var/lib/docker/overlay2/394627c68237f7993b91eb0c377001630bb2e709dd58f65d899d44a3586dae91/diff:/var/lib/docker/overlay2/0c0c3c94789fc85cd70d9ee2b56d67ce6471d4dced47f21f15152d4edb6bc3e5/diff:/var/lib/docker/overlay2/849809e48c9bcbfe092aa063fcd274f284eeacde89acbb602b439d4cf0aef9b6/diff:/var/lib/docker/overlay2/49c27f0a55f204b161aa2da33ba8004f46cb93bf673975ad1b6286ce659db632/diff:/var/lib/docker/overlay2/a712a8f5cdb2f3840c706296240407405826d2936df034393c1ddf3cf2480b5f/diff:/var/lib/docker/overlay2/47949bfd134ff7a50def5e9b3af3424faf216354d1f157552f3c63c67c2728ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220629114717-24356",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220629114717-24356/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220629114717-24356",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220629114717-24356",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220629114717-24356",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f01a004add6a38bbd2eeef63591d683ecdc0a86e7e09d3f450b9f36251384a44",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60321"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60322"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60323"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60324"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60325"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f01a004add6a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220629114717-24356": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b1f5e01895cc",
	                        "old-k8s-version-20220629114717-24356"
	                    ],
	                    "NetworkID": "7e2ec4ec0dd8da4d477d55acc03296107258203e7a7a266adf169e3b0ee9c64c",
	                    "EndpointID": "5c3ab2122cf8bbb30617dcaafec5da849a4b6aecffda698851a0bf59a65b2b47",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356: exit status 2 (1.041594648s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220629114717-24356 logs -n 25

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220629114717-24356 logs -n 25: (4.001651366s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 29 Jun 22 11:51 PDT |                     |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:52 PDT | 29 Jun 22 11:53 PDT |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 29 Jun 22 11:53 PDT | 29 Jun 22 11:53 PDT |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:53 PDT |                     |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |          |         |         |                     |                     |
	|         | --disable-driver-mounts                           |          |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |          |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:55 PDT | 29 Jun 22 11:55 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | sudo crictl images -o json                        |          |         |         |                     |                     |
	| pause   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:55 PDT | 29 Jun 22 11:55 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	| unpause | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:55 PDT | 29 Jun 22 11:55 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:56 PDT | 29 Jun 22 11:56 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:56 PDT | 29 Jun 22 11:56 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:56 PDT | 29 Jun 22 11:56 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT | 29 Jun 22 11:57 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT | 29 Jun 22 11:57 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT | 29 Jun 22 11:57 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT | 29 Jun 22 12:02 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:02 PDT | 29 Jun 22 12:02 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | sudo crictl images -o json                        |          |         |         |                     |                     |
	| pause   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:02 PDT | 29 Jun 22 12:02 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	| unpause | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | disable-driver-mounts-20220629120335-24356        |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:04 PDT |
	|         | default-k8s-different-port-20220629120335-24356   |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |          |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 29 Jun 22 12:05 PDT | 29 Jun 22 12:05 PDT |
	|         | default-k8s-different-port-20220629120335-24356   |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:05 PDT | 29 Jun 22 12:05 PDT |
	|         | default-k8s-different-port-20220629120335-24356   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 29 Jun 22 12:05 PDT | 29 Jun 22 12:05 PDT |
	|         | default-k8s-different-port-20220629120335-24356   |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:05 PDT | 29 Jun 22 12:10 PDT |
	|         | default-k8s-different-port-20220629120335-24356   |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |          |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/29 12:05:24
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 12:05:24.742130   40900 out.go:296] Setting OutFile to fd 1 ...
	I0629 12:05:24.742284   40900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 12:05:24.742289   40900 out.go:309] Setting ErrFile to fd 2...
	I0629 12:05:24.742293   40900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 12:05:24.742591   40900 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 12:05:24.742844   40900 out.go:303] Setting JSON to false
	I0629 12:05:24.757723   40900 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":11092,"bootTime":1656518432,"procs":372,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0629 12:05:24.757833   40900 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 12:05:24.779949   40900 out.go:177] * [default-k8s-different-port-20220629120335-24356] minikube v1.26.0 on Darwin 12.4
	I0629 12:05:24.822677   40900 notify.go:193] Checking for updates...
	I0629 12:05:24.843727   40900 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 12:05:24.864447   40900 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 12:05:24.885678   40900 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0629 12:05:24.907000   40900 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 12:05:24.928764   40900 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 12:05:24.950479   40900 config.go:178] Loaded profile config "default-k8s-different-port-20220629120335-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 12:05:24.950992   40900 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 12:05:25.019818   40900 docker.go:137] docker version: linux-20.10.16
	I0629 12:05:25.019950   40900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 12:05:25.141831   40900 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 19:05:25.07732428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 12:05:25.163888   40900 out.go:177] * Using the docker driver based on existing profile
	I0629 12:05:25.185202   40900 start.go:284] selected driver: docker
	I0629 12:05:25.185226   40900 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220629120335-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220629120335-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 12:05:25.185357   40900 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 12:05:25.188563   40900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 12:05:25.310870   40900 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 19:05:25.24659859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 12:05:25.311015   40900 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0629 12:05:25.311029   40900 cni.go:95] Creating CNI manager for ""
	I0629 12:05:25.311037   40900 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 12:05:25.311045   40900 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220629120335-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220629120335-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 12:05:25.354945   40900 out.go:177] * Starting control plane node default-k8s-different-port-20220629120335-24356 in cluster default-k8s-different-port-20220629120335-24356
	I0629 12:05:25.376387   40900 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 12:05:25.397604   40900 out.go:177] * Pulling base image ...
	I0629 12:05:25.439278   40900 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 12:05:25.439289   40900 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 12:05:25.439326   40900 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0629 12:05:25.439338   40900 cache.go:57] Caching tarball of preloaded images
	I0629 12:05:25.439430   40900 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 12:05:25.439443   40900 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0629 12:05:25.440039   40900 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/config.json ...
	I0629 12:05:25.502774   40900 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 12:05:25.502801   40900 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 12:05:25.502814   40900 cache.go:208] Successfully downloaded all kic artifacts
	I0629 12:05:25.502860   40900 start.go:352] acquiring machines lock for default-k8s-different-port-20220629120335-24356: {Name:mk60bb2ebdcfb729d9b918baeac3e57ffdf371c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 12:05:25.502941   40900 start.go:356] acquired machines lock for "default-k8s-different-port-20220629120335-24356" in 63.513µs
	I0629 12:05:25.502981   40900 start.go:94] Skipping create...Using existing machine configuration
	I0629 12:05:25.502990   40900 fix.go:55] fixHost starting: 
	I0629 12:05:25.503259   40900 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220629120335-24356 --format={{.State.Status}}
	I0629 12:05:25.570445   40900 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220629120335-24356: state=Stopped err=<nil>
	W0629 12:05:25.570489   40900 fix.go:129] unexpected machine state, will restart: <nil>
	I0629 12:05:25.612862   40900 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220629120335-24356" ...
	I0629 12:05:25.633949   40900 cli_runner.go:164] Run: docker start default-k8s-different-port-20220629120335-24356
	I0629 12:05:25.987798   40900 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220629120335-24356 --format={{.State.Status}}
	I0629 12:05:26.061121   40900 kic.go:416] container "default-k8s-different-port-20220629120335-24356" state is running.
	I0629 12:05:26.061836   40900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220629120335-24356
	I0629 12:05:26.139968   40900 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/config.json ...
	I0629 12:05:26.140415   40900 machine.go:88] provisioning docker machine ...
	I0629 12:05:26.140442   40900 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220629120335-24356"
	I0629 12:05:26.140525   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:26.214964   40900 main.go:134] libmachine: Using SSH client type: native
	I0629 12:05:26.215172   40900 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 61600 <nil> <nil>}
	I0629 12:05:26.215190   40900 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220629120335-24356 && echo "default-k8s-different-port-20220629120335-24356" | sudo tee /etc/hostname
	I0629 12:05:26.348464   40900 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220629120335-24356
	
	I0629 12:05:26.348558   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:26.425518   40900 main.go:134] libmachine: Using SSH client type: native
	I0629 12:05:26.425668   40900 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 61600 <nil> <nil>}
	I0629 12:05:26.425687   40900 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220629120335-24356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220629120335-24356/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220629120335-24356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 12:05:26.545918   40900 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 12:05:26.545942   40900 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube}
	I0629 12:05:26.545963   40900 ubuntu.go:177] setting up certificates
	I0629 12:05:26.545973   40900 provision.go:83] configureAuth start
	I0629 12:05:26.546049   40900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220629120335-24356
	I0629 12:05:26.619306   40900 provision.go:138] copyHostCerts
	I0629 12:05:26.619394   40900 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem, removing ...
	I0629 12:05:26.619403   40900 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem
	I0629 12:05:26.619490   40900 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem (1082 bytes)
	I0629 12:05:26.619715   40900 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem, removing ...
	I0629 12:05:26.619724   40900 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem
	I0629 12:05:26.619781   40900 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem (1123 bytes)
	I0629 12:05:26.619936   40900 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem, removing ...
	I0629 12:05:26.619942   40900 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem
	I0629 12:05:26.620000   40900 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem (1675 bytes)
	I0629 12:05:26.620120   40900 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220629120335-24356 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220629120335-24356]
	I0629 12:05:26.875537   40900 provision.go:172] copyRemoteCerts
	I0629 12:05:26.875603   40900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 12:05:26.875648   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:26.946535   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:05:27.033514   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0629 12:05:27.051758   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0629 12:05:27.069055   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0629 12:05:27.086527   40900 provision.go:86] duration metric: configureAuth took 540.524483ms
	I0629 12:05:27.086541   40900 ubuntu.go:193] setting minikube options for container-runtime
	I0629 12:05:27.086686   40900 config.go:178] Loaded profile config "default-k8s-different-port-20220629120335-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 12:05:27.086764   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:27.159960   40900 main.go:134] libmachine: Using SSH client type: native
	I0629 12:05:27.160131   40900 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 61600 <nil> <nil>}
	I0629 12:05:27.160142   40900 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0629 12:05:27.278802   40900 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0629 12:05:27.278816   40900 ubuntu.go:71] root file system type: overlay
	I0629 12:05:27.278968   40900 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0629 12:05:27.279043   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:27.349746   40900 main.go:134] libmachine: Using SSH client type: native
	I0629 12:05:27.349897   40900 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 61600 <nil> <nil>}
	I0629 12:05:27.349945   40900 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0629 12:05:27.475893   40900 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0629 12:05:27.475971   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:27.546989   40900 main.go:134] libmachine: Using SSH client type: native
	I0629 12:05:27.547153   40900 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 61600 <nil> <nil>}
	I0629 12:05:27.547166   40900 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0629 12:05:27.669428   40900 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 12:05:27.669447   40900 machine.go:91] provisioned docker machine in 1.528975004s
	I0629 12:05:27.669457   40900 start.go:306] post-start starting for "default-k8s-different-port-20220629120335-24356" (driver="docker")
	I0629 12:05:27.669462   40900 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 12:05:27.669535   40900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 12:05:27.669581   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:27.740351   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:05:27.824385   40900 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 12:05:27.827915   40900 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 12:05:27.827935   40900 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 12:05:27.827942   40900 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 12:05:27.827947   40900 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 12:05:27.827955   40900 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/addons for local assets ...
	I0629 12:05:27.828087   40900 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files for local assets ...
	I0629 12:05:27.828236   40900 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem -> 243562.pem in /etc/ssl/certs
	I0629 12:05:27.828402   40900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 12:05:27.835575   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /etc/ssl/certs/243562.pem (1708 bytes)
	I0629 12:05:27.854776   40900 start.go:309] post-start completed in 185.304144ms
	I0629 12:05:27.854863   40900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 12:05:27.854912   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:27.926994   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:05:28.012302   40900 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 12:05:28.016583   40900 fix.go:57] fixHost completed within 2.513517141s
	I0629 12:05:28.016593   40900 start.go:81] releasing machines lock for "default-k8s-different-port-20220629120335-24356", held for 2.513569784s
	I0629 12:05:28.016680   40900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220629120335-24356
	I0629 12:05:28.088364   40900 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 12:05:28.088365   40900 ssh_runner.go:195] Run: systemctl --version
	I0629 12:05:28.088430   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:28.088437   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:28.164662   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:05:28.166354   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:05:28.248710   40900 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0629 12:05:28.728545   40900 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0629 12:05:28.728612   40900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 12:05:28.740680   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 12:05:28.753053   40900 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0629 12:05:28.822506   40900 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0629 12:05:28.886100   40900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 12:05:28.947818   40900 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0629 12:05:29.176842   40900 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0629 12:05:29.240921   40900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 12:05:29.307948   40900 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0629 12:05:29.317549   40900 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0629 12:05:29.317619   40900 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0629 12:05:29.321834   40900 start.go:468] Will wait 60s for crictl version
	I0629 12:05:29.321886   40900 ssh_runner.go:195] Run: sudo crictl version
	I0629 12:05:29.435634   40900 start.go:477] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0629 12:05:29.435699   40900 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 12:05:29.470251   40900 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 12:05:29.547597   40900 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0629 12:05:29.547772   40900 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220629120335-24356 dig +short host.docker.internal
	I0629 12:05:29.681289   40900 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0629 12:05:29.681400   40900 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0629 12:05:29.685664   40900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 12:05:29.695599   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:29.781285   40900 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 12:05:29.781347   40900 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 12:05:29.812942   40900 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0629 12:05:29.812959   40900 docker.go:533] Images already preloaded, skipping extraction
	I0629 12:05:29.813043   40900 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 12:05:29.844705   40900 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0629 12:05:29.844730   40900 cache_images.go:84] Images are preloaded, skipping loading
	I0629 12:05:29.844805   40900 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0629 12:05:29.916958   40900 cni.go:95] Creating CNI manager for ""
	I0629 12:05:29.916970   40900 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 12:05:29.916983   40900 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0629 12:05:29.916996   40900 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220629120335-24356 NodeName:default-k8s-different-port-20220629120335-24356 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 12:05:29.917102   40900 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-different-port-20220629120335-24356"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0629 12:05:29.917190   40900 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=default-k8s-different-port-20220629120335-24356 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220629120335-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0629 12:05:29.917247   40900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0629 12:05:29.924780   40900 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 12:05:29.924831   40900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0629 12:05:29.932000   40900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (509 bytes)
	I0629 12:05:29.944399   40900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 12:05:29.956598   40900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I0629 12:05:29.968949   40900 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0629 12:05:29.972554   40900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 12:05:29.981744   40900 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356 for IP: 192.168.67.2
	I0629 12:05:29.981862   40900 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key
	I0629 12:05:29.981909   40900 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key
	I0629 12:05:29.981988   40900 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/client.key
	I0629 12:05:29.982046   40900 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/apiserver.key.c7fa3a9e
	I0629 12:05:29.982104   40900 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/proxy-client.key
	I0629 12:05:29.982298   40900 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem (1338 bytes)
	W0629 12:05:29.982336   40900 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356_empty.pem, impossibly tiny 0 bytes
	I0629 12:05:29.982348   40900 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem (1679 bytes)
	I0629 12:05:29.982396   40900 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem (1082 bytes)
	I0629 12:05:29.982427   40900 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem (1123 bytes)
	I0629 12:05:29.982457   40900 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem (1675 bytes)
	I0629 12:05:29.982526   40900 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem (1708 bytes)
	I0629 12:05:29.983077   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0629 12:05:29.999906   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0629 12:05:30.016302   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0629 12:05:30.032829   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0629 12:05:30.049113   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 12:05:30.066680   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0629 12:05:30.085650   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 12:05:30.104770   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0629 12:05:30.122336   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /usr/share/ca-certificates/243562.pem (1708 bytes)
	I0629 12:05:30.139889   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 12:05:30.156772   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem --> /usr/share/ca-certificates/24356.pem (1338 bytes)
	I0629 12:05:30.173073   40900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0629 12:05:30.185217   40900 ssh_runner.go:195] Run: openssl version
	I0629 12:05:30.190479   40900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 12:05:30.198324   40900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 12:05:30.202106   40900 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 17:54 /usr/share/ca-certificates/minikubeCA.pem
	I0629 12:05:30.202144   40900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 12:05:30.207062   40900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 12:05:30.214124   40900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24356.pem && ln -fs /usr/share/ca-certificates/24356.pem /etc/ssl/certs/24356.pem"
	I0629 12:05:30.221651   40900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24356.pem
	I0629 12:05:30.225365   40900 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 17:58 /usr/share/ca-certificates/24356.pem
	I0629 12:05:30.225410   40900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24356.pem
	I0629 12:05:30.230811   40900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24356.pem /etc/ssl/certs/51391683.0"
	I0629 12:05:30.238146   40900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/243562.pem && ln -fs /usr/share/ca-certificates/243562.pem /etc/ssl/certs/243562.pem"
	I0629 12:05:30.245876   40900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/243562.pem
	I0629 12:05:30.249833   40900 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 17:58 /usr/share/ca-certificates/243562.pem
	I0629 12:05:30.249872   40900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/243562.pem
	I0629 12:05:30.261528   40900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/243562.pem /etc/ssl/certs/3ec20f2e.0"
	I0629 12:05:30.271938   40900 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220629120335-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220629120335-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 12:05:30.272050   40900 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 12:05:30.300455   40900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0629 12:05:30.307957   40900 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0629 12:05:30.307974   40900 kubeadm.go:626] restartCluster start
	I0629 12:05:30.308019   40900 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0629 12:05:30.315073   40900 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:30.315136   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:30.387728   40900 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220629120335-24356" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 12:05:30.387917   40900 kubeconfig.go:127] "default-k8s-different-port-20220629120335-24356" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig - will repair!
	I0629 12:05:30.388246   40900 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk20ebad566718388182fa7c9da1cb4ef6bd9ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 12:05:30.389575   40900 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0629 12:05:30.397283   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:30.397330   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:30.405451   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:30.607595   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:30.607799   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:30.618280   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:30.805713   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:30.805781   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:30.814896   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:31.007650   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:31.007853   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:31.018553   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:31.205587   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:31.205734   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:31.216423   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:31.407635   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:31.407906   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:31.418511   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:31.605584   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:31.605644   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:31.615628   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:31.806285   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:31.806423   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:31.817288   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:32.005635   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:32.005834   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:32.016849   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:32.206682   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:32.206849   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:32.218451   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:32.405896   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:32.406007   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:32.416979   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:32.606317   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:32.606498   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:32.616827   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:32.805660   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:32.805734   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:32.815566   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:33.007709   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:33.007876   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:33.019040   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:33.206756   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:33.206924   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:33.218107   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:33.407701   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:33.407880   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:33.418775   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:33.418786   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:33.418833   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:33.426759   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:33.426770   40900 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0629 12:05:33.426779   40900 kubeadm.go:1092] stopping kube-system containers ...
	I0629 12:05:33.426834   40900 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 12:05:33.458274   40900 docker.go:434] Stopping containers: [17ccfd6d87bb f1818c465224 c1adcf1be18e cf519054c3a4 9f0b97ca9575 b425c6e78162 b2c6e14c7587 2a7a4e44fd96 d3440e6bd030 f677cfba52c7 9ba118edb0f3 55aed3b8ba56 2667b1e639dc 70e86622f020 855f6856c31f]
	I0629 12:05:33.458347   40900 ssh_runner.go:195] Run: docker stop 17ccfd6d87bb f1818c465224 c1adcf1be18e cf519054c3a4 9f0b97ca9575 b425c6e78162 b2c6e14c7587 2a7a4e44fd96 d3440e6bd030 f677cfba52c7 9ba118edb0f3 55aed3b8ba56 2667b1e639dc 70e86622f020 855f6856c31f
	I0629 12:05:33.489879   40900 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0629 12:05:33.500322   40900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 12:05:33.507933   40900 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun 29 19:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jun 29 19:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Jun 29 19:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun 29 19:03 /etc/kubernetes/scheduler.conf
	
	I0629 12:05:33.507980   40900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0629 12:05:33.515037   40900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0629 12:05:33.522593   40900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0629 12:05:33.529626   40900 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:33.529674   40900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0629 12:05:33.536295   40900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0629 12:05:33.543526   40900 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:33.543573   40900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0629 12:05:33.550652   40900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 12:05:33.557856   40900 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0629 12:05:33.557869   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:05:33.603386   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:05:34.614038   40900 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.010601206s)
	I0629 12:05:34.614052   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:05:34.784553   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:05:34.833543   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:05:34.911771   40900 api_server.go:51] waiting for apiserver process to appear ...
	I0629 12:05:34.911850   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:05:35.421616   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:05:35.921384   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:05:35.935811   40900 api_server.go:71] duration metric: took 1.024009063s to wait for apiserver process to appear ...
	I0629 12:05:35.935830   40900 api_server.go:87] waiting for apiserver healthz status ...
	I0629 12:05:35.935849   40900 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61604/healthz ...
	I0629 12:05:35.937118   40900 api_server.go:256] stopped: https://127.0.0.1:61604/healthz: Get "https://127.0.0.1:61604/healthz": EOF
	I0629 12:05:36.438094   40900 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61604/healthz ...
	I0629 12:05:39.455472   40900 api_server.go:266] https://127.0.0.1:61604/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0629 12:05:39.455492   40900 api_server.go:102] status: https://127.0.0.1:61604/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0629 12:05:39.937469   40900 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61604/healthz ...
	I0629 12:05:39.943847   40900 api_server.go:266] https://127.0.0.1:61604/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 12:05:39.943858   40900 api_server.go:102] status: https://127.0.0.1:61604/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 12:05:40.437422   40900 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61604/healthz ...
	I0629 12:05:40.444593   40900 api_server.go:266] https://127.0.0.1:61604/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 12:05:40.444607   40900 api_server.go:102] status: https://127.0.0.1:61604/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 12:05:40.937423   40900 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61604/healthz ...
	I0629 12:05:40.942951   40900 api_server.go:266] https://127.0.0.1:61604/healthz returned 200:
	ok
	I0629 12:05:40.949694   40900 api_server.go:140] control plane version: v1.24.2
	I0629 12:05:40.949709   40900 api_server.go:130] duration metric: took 5.0137233s to wait for apiserver health ...
	I0629 12:05:40.949717   40900 cni.go:95] Creating CNI manager for ""
	I0629 12:05:40.949721   40900 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 12:05:40.949730   40900 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 12:05:40.956768   40900 system_pods.go:59] 8 kube-system pods found
	I0629 12:05:40.956784   40900 system_pods.go:61] "coredns-6d4b75cb6d-sr5rq" [6859dc98-d098-4a2f-b3e6-6e5b6225e930] Running
	I0629 12:05:40.956790   40900 system_pods.go:61] "etcd-default-k8s-different-port-20220629120335-24356" [4af024aa-48ac-40b0-b4c8-d05ab73ec465] Running
	I0629 12:05:40.956794   40900 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220629120335-24356" [bd9308ff-a917-4e0e-9d5c-8192ea128b2f] Running
	I0629 12:05:40.956807   40900 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220629120335-24356" [5d116566-36ba-4925-973b-c8622702e1e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0629 12:05:40.956811   40900 system_pods.go:61] "kube-proxy-c4lzs" [9bc1f0bb-d9c3-4809-a4b2-0f750021bad3] Running
	I0629 12:05:40.956834   40900 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220629120335-24356" [22bd5cf2-dd2c-4cb9-ad4b-8ea4c8d5772f] Running
	I0629 12:05:40.956839   40900 system_pods.go:61] "metrics-server-5c6f97fb75-rfjxz" [a1dcb333-c180-4b6b-8f3f-025a41f001b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 12:05:40.956843   40900 system_pods.go:61] "storage-provisioner" [5f591cc6-9b0f-4275-89e2-3096f390587d] Running
	I0629 12:05:40.956847   40900 system_pods.go:74] duration metric: took 7.112659ms to wait for pod list to return data ...
	I0629 12:05:40.956853   40900 node_conditions.go:102] verifying NodePressure condition ...
	I0629 12:05:40.959478   40900 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0629 12:05:40.959495   40900 node_conditions.go:123] node cpu capacity is 6
	I0629 12:05:40.959503   40900 node_conditions.go:105] duration metric: took 2.644447ms to run NodePressure ...
	I0629 12:05:40.959514   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:05:41.214716   40900 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0629 12:05:41.219273   40900 kubeadm.go:777] kubelet initialised
	I0629 12:05:41.219284   40900 kubeadm.go:778] duration metric: took 4.549914ms waiting for restarted kubelet to initialise ...
	I0629 12:05:41.219292   40900 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 12:05:41.225780   40900 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-sr5rq" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:41.231094   40900 pod_ready.go:92] pod "coredns-6d4b75cb6d-sr5rq" in "kube-system" namespace has status "Ready":"True"
	I0629 12:05:41.231106   40900 pod_ready.go:81] duration metric: took 5.312518ms waiting for pod "coredns-6d4b75cb6d-sr5rq" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:41.231116   40900 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:41.238011   40900 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:05:41.238021   40900 pod_ready.go:81] duration metric: took 6.900167ms waiting for pod "etcd-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:41.238028   40900 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:41.243816   40900 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:05:41.243825   40900 pod_ready.go:81] duration metric: took 5.792024ms waiting for pod "kube-apiserver-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:41.243832   40900 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:43.362002   40900 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 12:05:45.858402   40900 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 12:05:47.859472   40900 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 12:05:49.862061   40900 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 12:05:51.859532   40900 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:05:51.859545   40900 pod_ready.go:81] duration metric: took 10.615389832s waiting for pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:51.859553   40900 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c4lzs" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:51.864514   40900 pod_ready.go:92] pod "kube-proxy-c4lzs" in "kube-system" namespace has status "Ready":"True"
	I0629 12:05:51.864523   40900 pod_ready.go:81] duration metric: took 4.966121ms waiting for pod "kube-proxy-c4lzs" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:51.864529   40900 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:51.870041   40900 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:05:51.870052   40900 pod_ready.go:81] duration metric: took 5.516262ms waiting for pod "kube-scheduler-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:51.870058   40900 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:53.883004   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:05:55.884160   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:05:58.383036   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:00.384561   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:02.882051   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:04.884520   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:07.383533   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:09.882797   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:11.882979   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:13.883312   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:15.883735   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:18.385501   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:20.883564   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:22.886763   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:25.383709   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:27.386276   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:29.885692   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:32.384164   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:34.883309   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:36.884800   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:39.384855   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:41.884577   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:44.384450   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:46.885968   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:49.384678   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:51.386004   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:53.886429   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:56.384509   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:58.386257   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:00.386604   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:02.885075   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:05.385265   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:07.386268   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:09.886384   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:12.385466   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:14.887248   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:17.385034   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:19.385266   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:21.886143   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:23.886397   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:25.887289   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:28.387746   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:30.890336   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:33.385686   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:35.387141   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:37.387612   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:39.885855   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:42.386043   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:44.387585   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:46.890258   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:49.388039   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:51.884165   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:53.885975   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:55.887335   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:57.888082   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:00.387867   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:02.885883   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:04.887741   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:07.386962   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:09.887038   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:11.888284   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:14.386729   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:16.388752   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:18.889167   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:21.388569   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:23.389000   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:25.889318   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:28.387156   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:30.887591   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:32.888038   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:35.387189   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:37.388954   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:39.888736   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:42.387770   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:44.388231   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:46.388865   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:48.390054   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:50.887077   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:52.889796   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:55.387603   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:57.389156   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:59.390067   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:01.888280   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:03.890615   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:06.388810   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:08.395053   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:10.891022   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:13.387671   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:15.389123   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:17.389657   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:19.891053   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:22.390598   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:24.888414   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:26.889444   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:28.890985   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:31.389168   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:33.391212   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:35.888935   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:37.889955   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:40.387878   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:42.391117   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:44.887624   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:46.888329   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:48.892489   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:51.390289   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:51.884754   40900 pod_ready.go:81] duration metric: took 4m0.007433392s waiting for pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace to be "Ready" ...
	E0629 12:09:51.884779   40900 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0629 12:09:51.884801   40900 pod_ready.go:38] duration metric: took 4m10.657980757s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 12:09:51.884847   40900 kubeadm.go:630] restartCluster took 4m21.569015743s
	W0629 12:09:51.884974   40900 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0629 12:09:51.885001   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0629 12:09:54.340631   40900 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.455542748s)
	I0629 12:09:54.340693   40900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 12:09:54.350928   40900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 12:09:54.358196   40900 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 12:09:54.358240   40900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 12:09:54.365645   40900 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 12:09:54.365669   40900 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 12:09:54.644180   40900 out.go:204]   - Generating certificates and keys ...
	I0629 12:09:55.436699   40900 out.go:204]   - Booting up control plane ...
	I0629 12:10:02.007426   40900 out.go:204]   - Configuring RBAC rules ...
	I0629 12:10:02.381881   40900 cni.go:95] Creating CNI manager for ""
	I0629 12:10:02.381896   40900 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 12:10:02.381926   40900 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0629 12:10:02.382004   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:02.382007   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=80ef72c6e06144133907f90b1b2924df52b551ed minikube.k8s.io/name=default-k8s-different-port-20220629120335-24356 minikube.k8s.io/updated_at=2022_06_29T12_10_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:02.398555   40900 ops.go:34] apiserver oom_adj: -16
	I0629 12:10:02.524549   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:03.081788   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:03.580947   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:04.082906   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:04.581016   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:05.080952   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:05.582778   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:06.082461   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:06.581135   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:07.081462   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:07.580952   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:08.083116   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:08.582944   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:09.081159   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:09.583028   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:10.081502   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:10.583083   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:11.082047   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:11.581902   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:12.080935   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:12.581027   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:13.081091   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:13.581484   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:14.081976   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:14.581567   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:15.081419   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:15.581169   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:16.081215   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:16.581098   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:16.636385   40900 kubeadm.go:1045] duration metric: took 14.25401703s to wait for elevateKubeSystemPrivileges.
	I0629 12:10:16.636403   40900 kubeadm.go:397] StartCluster complete in 4m46.355879997s
	I0629 12:10:16.636421   40900 settings.go:142] acquiring lock: {Name:mk8cd784535a926dd1b6955ad1b3a357865d16d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 12:10:16.636502   40900 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 12:10:16.637057   40900 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk20ebad566718388182fa7c9da1cb4ef6bd9ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 12:10:17.154534   40900 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220629120335-24356" rescaled to 1
	I0629 12:10:17.154581   40900 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 12:10:17.154592   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0629 12:10:17.154635   40900 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0629 12:10:17.179168   40900 out.go:177] * Verifying Kubernetes components...
	I0629 12:10:17.154816   40900 config.go:178] Loaded profile config "default-k8s-different-port-20220629120335-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 12:10:17.179227   40900 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220629120335-24356"
	I0629 12:10:17.179238   40900 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220629120335-24356"
	I0629 12:10:17.179242   40900 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220629120335-24356"
	I0629 12:10:17.179244   40900 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220629120335-24356"
	I0629 12:10:17.251996   40900 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220629120335-24356"
	I0629 12:10:17.252003   40900 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220629120335-24356"
	W0629 12:10:17.252026   40900 addons.go:162] addon storage-provisioner should already be in state true
	I0629 12:10:17.252026   40900 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220629120335-24356"
	I0629 12:10:17.252032   40900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W0629 12:10:17.252012   40900 addons.go:162] addon metrics-server should already be in state true
	I0629 12:10:17.252011   40900 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220629120335-24356"
	W0629 12:10:17.252073   40900 addons.go:162] addon dashboard should already be in state true
	I0629 12:10:17.252075   40900 host.go:66] Checking if "default-k8s-different-port-20220629120335-24356" exists ...
	I0629 12:10:17.252094   40900 host.go:66] Checking if "default-k8s-different-port-20220629120335-24356" exists ...
	I0629 12:10:17.252113   40900 host.go:66] Checking if "default-k8s-different-port-20220629120335-24356" exists ...
	I0629 12:10:17.252342   40900 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220629120335-24356 --format={{.State.Status}}
	I0629 12:10:17.252474   40900 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220629120335-24356 --format={{.State.Status}}
	I0629 12:10:17.253292   40900 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220629120335-24356 --format={{.State.Status}}
	I0629 12:10:17.253467   40900 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220629120335-24356 --format={{.State.Status}}
	I0629 12:10:17.264182   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0629 12:10:17.276210   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:10:17.405718   40900 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 12:10:17.415915   40900 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220629120335-24356"
	I0629 12:10:17.433419   40900 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220629120335-24356" to be "Ready" ...
	I0629 12:10:17.443055   40900 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 12:10:17.464058   40900 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	W0629 12:10:17.484802   40900 addons.go:162] addon default-storageclass should already be in state true
	I0629 12:10:17.506219   40900 host.go:66] Checking if "default-k8s-different-port-20220629120335-24356" exists ...
	I0629 12:10:17.484810   40900 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0629 12:10:17.484823   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0629 12:10:17.506850   40900 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220629120335-24356 --format={{.State.Status}}
	I0629 12:10:17.520608   40900 node_ready.go:49] node "default-k8s-different-port-20220629120335-24356" has status "Ready":"True"
	I0629 12:10:17.527049   40900 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0629 12:10:17.527075   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:10:17.563798   40900 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0629 12:10:17.563870   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0629 12:10:17.563872   40900 node_ready.go:38] duration metric: took 79.048397ms waiting for node "default-k8s-different-port-20220629120335-24356" to be "Ready" ...
	I0629 12:10:17.585134   40900 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 12:10:17.585184   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0629 12:10:17.585206   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0629 12:10:17.585291   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:10:17.585296   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:10:17.593199   40900 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-54rws" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:17.665696   40900 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0629 12:10:17.665711   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0629 12:10:17.665787   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:10:17.670001   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:10:17.696870   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:10:17.700767   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:10:17.759343   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:10:17.835815   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0629 12:10:17.835838   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0629 12:10:17.837925   40900 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0629 12:10:17.837935   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0629 12:10:17.850995   40900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 12:10:17.922795   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0629 12:10:17.922813   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0629 12:10:17.933868   40900 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0629 12:10:17.933891   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0629 12:10:17.949816   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0629 12:10:17.949837   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0629 12:10:18.025643   40900 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0629 12:10:18.025663   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0629 12:10:18.040174   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0629 12:10:18.040187   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0629 12:10:18.053606   40900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0629 12:10:18.116478   40900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0629 12:10:18.137318   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0629 12:10:18.137344   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0629 12:10:18.240726   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0629 12:10:18.240742   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0629 12:10:18.319690   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0629 12:10:18.319710   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0629 12:10:18.344325   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0629 12:10:18.344337   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0629 12:10:18.358646   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0629 12:10:18.358658   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0629 12:10:18.373107   40900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0629 12:10:18.639163   40900 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.374898461s)
	I0629 12:10:18.639189   40900 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0629 12:10:18.848043   40900 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220629120335-24356"
	I0629 12:10:19.169072   40900 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0629 12:10:19.227078   40900 addons.go:414] enableAddons completed in 2.072399823s
	I0629 12:10:19.630154   40900 pod_ready.go:102] pod "coredns-6d4b75cb6d-54rws" in "kube-system" namespace has status "Ready":"False"
	I0629 12:10:22.129155   40900 pod_ready.go:102] pod "coredns-6d4b75cb6d-54rws" in "kube-system" namespace has status "Ready":"False"
	I0629 12:10:22.628751   40900 pod_ready.go:92] pod "coredns-6d4b75cb6d-54rws" in "kube-system" namespace has status "Ready":"True"
	I0629 12:10:22.628765   40900 pod_ready.go:81] duration metric: took 5.035392246s waiting for pod "coredns-6d4b75cb6d-54rws" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.628773   40900 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-vf8rl" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.633109   40900 pod_ready.go:92] pod "coredns-6d4b75cb6d-vf8rl" in "kube-system" namespace has status "Ready":"True"
	I0629 12:10:22.633116   40900 pod_ready.go:81] duration metric: took 4.337728ms waiting for pod "coredns-6d4b75cb6d-vf8rl" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.633122   40900 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.637139   40900 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:10:22.637148   40900 pod_ready.go:81] duration metric: took 4.019768ms waiting for pod "etcd-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.637154   40900 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.641938   40900 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:10:22.641946   40900 pod_ready.go:81] duration metric: took 4.786805ms waiting for pod "kube-apiserver-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.641954   40900 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.646093   40900 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:10:22.646102   40900 pod_ready.go:81] duration metric: took 4.142515ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.646108   40900 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-42mtt" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:23.025736   40900 pod_ready.go:92] pod "kube-proxy-42mtt" in "kube-system" namespace has status "Ready":"True"
	I0629 12:10:23.025745   40900 pod_ready.go:81] duration metric: took 379.621193ms waiting for pod "kube-proxy-42mtt" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:23.025752   40900 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:23.425527   40900 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:10:23.425537   40900 pod_ready.go:81] duration metric: took 399.769149ms waiting for pod "kube-scheduler-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:23.425543   40900 pod_ready.go:38] duration metric: took 5.840170789s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 12:10:23.425556   40900 api_server.go:51] waiting for apiserver process to appear ...
	I0629 12:10:23.425608   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:10:23.439147   40900 api_server.go:71] duration metric: took 6.284351507s to wait for apiserver process to appear ...
	I0629 12:10:23.439159   40900 api_server.go:87] waiting for apiserver healthz status ...
	I0629 12:10:23.439165   40900 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61604/healthz ...
	I0629 12:10:23.445058   40900 api_server.go:266] https://127.0.0.1:61604/healthz returned 200:
	ok
	I0629 12:10:23.446503   40900 api_server.go:140] control plane version: v1.24.2
	I0629 12:10:23.446513   40900 api_server.go:130] duration metric: took 7.350129ms to wait for apiserver health ...
	I0629 12:10:23.446519   40900 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 12:10:23.632422   40900 system_pods.go:59] 9 kube-system pods found
	I0629 12:10:23.632439   40900 system_pods.go:61] "coredns-6d4b75cb6d-54rws" [60c259ab-57b4-463a-b089-fccaa6d3f6c0] Running
	I0629 12:10:23.632443   40900 system_pods.go:61] "coredns-6d4b75cb6d-vf8rl" [238d3a6f-05f7-4855-85b5-0d07b08f9074] Running
	I0629 12:10:23.632462   40900 system_pods.go:61] "etcd-default-k8s-different-port-20220629120335-24356" [2ed40fc5-8a2c-4005-88a8-162bf7f5db1f] Running
	I0629 12:10:23.632466   40900 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220629120335-24356" [9b870f1e-f6ca-4bef-91f3-9d2de9de0aec] Running
	I0629 12:10:23.632490   40900 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220629120335-24356" [8cf4752e-ce9b-4b30-8d53-5f06bac5f6a1] Running
	I0629 12:10:23.632493   40900 system_pods.go:61] "kube-proxy-42mtt" [322de8c5-d47e-4bb0-9d7d-ef640626c70c] Running
	I0629 12:10:23.632500   40900 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220629120335-24356" [c257d0fd-43d0-40eb-b9d1-0f1d4747a0ae] Running
	I0629 12:10:23.632505   40900 system_pods.go:61] "metrics-server-5c6f97fb75-smdz9" [2661f4fb-d410-4b0b-9abe-0c030e00d8b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 12:10:23.632511   40900 system_pods.go:61] "storage-provisioner" [bc59072d-a402-4441-ace1-1ade0e3b7e2f] Running
	I0629 12:10:23.632516   40900 system_pods.go:74] duration metric: took 185.971139ms to wait for pod list to return data ...
	I0629 12:10:23.632520   40900 default_sa.go:34] waiting for default service account to be created ...
	I0629 12:10:23.825634   40900 default_sa.go:45] found service account: "default"
	I0629 12:10:23.825650   40900 default_sa.go:55] duration metric: took 193.118786ms for default service account to be created ...
	I0629 12:10:23.825658   40900 system_pods.go:116] waiting for k8s-apps to be running ...
	I0629 12:10:24.028758   40900 system_pods.go:86] 9 kube-system pods found
	I0629 12:10:24.028773   40900 system_pods.go:89] "coredns-6d4b75cb6d-54rws" [60c259ab-57b4-463a-b089-fccaa6d3f6c0] Running
	I0629 12:10:24.028778   40900 system_pods.go:89] "coredns-6d4b75cb6d-vf8rl" [238d3a6f-05f7-4855-85b5-0d07b08f9074] Running
	I0629 12:10:24.028781   40900 system_pods.go:89] "etcd-default-k8s-different-port-20220629120335-24356" [2ed40fc5-8a2c-4005-88a8-162bf7f5db1f] Running
	I0629 12:10:24.028785   40900 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220629120335-24356" [9b870f1e-f6ca-4bef-91f3-9d2de9de0aec] Running
	I0629 12:10:24.028789   40900 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220629120335-24356" [8cf4752e-ce9b-4b30-8d53-5f06bac5f6a1] Running
	I0629 12:10:24.028792   40900 system_pods.go:89] "kube-proxy-42mtt" [322de8c5-d47e-4bb0-9d7d-ef640626c70c] Running
	I0629 12:10:24.028795   40900 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220629120335-24356" [c257d0fd-43d0-40eb-b9d1-0f1d4747a0ae] Running
	I0629 12:10:24.028803   40900 system_pods.go:89] "metrics-server-5c6f97fb75-smdz9" [2661f4fb-d410-4b0b-9abe-0c030e00d8b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 12:10:24.028807   40900 system_pods.go:89] "storage-provisioner" [bc59072d-a402-4441-ace1-1ade0e3b7e2f] Running
	I0629 12:10:24.028813   40900 system_pods.go:126] duration metric: took 203.144154ms to wait for k8s-apps to be running ...
	I0629 12:10:24.028818   40900 system_svc.go:44] waiting for kubelet service to be running ....
	I0629 12:10:24.028868   40900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 12:10:24.039499   40900 system_svc.go:56] duration metric: took 10.670289ms WaitForService to wait for kubelet.
	I0629 12:10:24.039512   40900 kubeadm.go:572] duration metric: took 6.88470241s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0629 12:10:24.039525   40900 node_conditions.go:102] verifying NodePressure condition ...
	I0629 12:10:24.226241   40900 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0629 12:10:24.226255   40900 node_conditions.go:123] node cpu capacity is 6
	I0629 12:10:24.226262   40900 node_conditions.go:105] duration metric: took 186.72858ms to run NodePressure ...
	I0629 12:10:24.226270   40900 start.go:213] waiting for startup goroutines ...
	I0629 12:10:24.261002   40900 start.go:506] kubectl: 1.24.0, cluster: 1.24.2 (minor skew: 0)
	I0629 12:10:24.304930   40900 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220629120335-24356" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-29 18:53:02 UTC, end at Wed 2022-06-29 19:10:46 UTC. --
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 systemd[1]: Stopping Docker Application Container Engine...
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[131]: time="2022-06-29T18:53:05.216575736Z" level=info msg="Processing signal 'terminated'"
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[131]: time="2022-06-29T18:53:05.217825930Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[131]: time="2022-06-29T18:53:05.218386582Z" level=info msg="Daemon shutdown complete"
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 systemd[1]: docker.service: Succeeded.
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 systemd[1]: Stopped Docker Application Container Engine.
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 systemd[1]: Starting Docker Application Container Engine...
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.272004427Z" level=info msg="Starting up"
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.273752497Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.273789659Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.273812919Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.273823680Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.274963883Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.275024151Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.275067758Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.275110265Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.278499483Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.281321453Z" level=info msg="Loading containers: start."
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.354206270Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.383916961Z" level=info msg="Loading containers: done."
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.391706828Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.391760406Z" level=info msg="Daemon has completed initialization"
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 systemd[1]: Started Docker Application Container Engine.
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.417864571Z" level=info msg="API listen on [::]:2376"
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.420446680Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-06-29T19:10:48Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  19:10:48 up  1:18,  0 users,  load average: 0.66, 0.91, 1.16
	Linux old-k8s-version-20220629114717-24356 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-29 18:53:02 UTC, end at Wed 2022-06-29 19:10:49 UTC. --
	Jun 29 19:10:47 old-k8s-version-20220629114717-24356 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 29 19:10:47 old-k8s-version-20220629114717-24356 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 930.
	Jun 29 19:10:47 old-k8s-version-20220629114717-24356 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 29 19:10:47 old-k8s-version-20220629114717-24356 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 29 19:10:47 old-k8s-version-20220629114717-24356 kubelet[24482]: I0629 19:10:47.930668   24482 server.go:410] Version: v1.16.0
	Jun 29 19:10:47 old-k8s-version-20220629114717-24356 kubelet[24482]: I0629 19:10:47.930930   24482 plugins.go:100] No cloud provider specified.
	Jun 29 19:10:47 old-k8s-version-20220629114717-24356 kubelet[24482]: I0629 19:10:47.930944   24482 server.go:773] Client rotation is on, will bootstrap in background
	Jun 29 19:10:47 old-k8s-version-20220629114717-24356 kubelet[24482]: I0629 19:10:47.932956   24482 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 29 19:10:47 old-k8s-version-20220629114717-24356 kubelet[24482]: W0629 19:10:47.933701   24482 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 29 19:10:47 old-k8s-version-20220629114717-24356 kubelet[24482]: W0629 19:10:47.933765   24482 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 29 19:10:47 old-k8s-version-20220629114717-24356 kubelet[24482]: F0629 19:10:47.933788   24482 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 29 19:10:47 old-k8s-version-20220629114717-24356 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 29 19:10:47 old-k8s-version-20220629114717-24356 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 29 19:10:48 old-k8s-version-20220629114717-24356 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 931.
	Jun 29 19:10:48 old-k8s-version-20220629114717-24356 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 29 19:10:48 old-k8s-version-20220629114717-24356 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 29 19:10:48 old-k8s-version-20220629114717-24356 kubelet[24502]: I0629 19:10:48.677975   24502 server.go:410] Version: v1.16.0
	Jun 29 19:10:48 old-k8s-version-20220629114717-24356 kubelet[24502]: I0629 19:10:48.678476   24502 plugins.go:100] No cloud provider specified.
	Jun 29 19:10:48 old-k8s-version-20220629114717-24356 kubelet[24502]: I0629 19:10:48.678509   24502 server.go:773] Client rotation is on, will bootstrap in background
	Jun 29 19:10:48 old-k8s-version-20220629114717-24356 kubelet[24502]: I0629 19:10:48.680236   24502 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 29 19:10:48 old-k8s-version-20220629114717-24356 kubelet[24502]: W0629 19:10:48.681471   24502 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 29 19:10:48 old-k8s-version-20220629114717-24356 kubelet[24502]: W0629 19:10:48.681621   24502 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 29 19:10:48 old-k8s-version-20220629114717-24356 kubelet[24502]: F0629 19:10:48.681678   24502 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 29 19:10:48 old-k8s-version-20220629114717-24356 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 29 19:10:48 old-k8s-version-20220629114717-24356 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0629 12:10:48.735502   41309 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356: exit status 2 (446.604991ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220629114717-24356" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (576.22s)

TestStartStop/group/embed-certs/serial/Pause (43.67s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-20220629115611-24356 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220629115611-24356 -n embed-certs-20220629115611-24356

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220629115611-24356 -n embed-certs-20220629115611-24356: exit status 2 (16.101897981s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220629115611-24356 -n embed-certs-20220629115611-24356

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220629115611-24356 -n embed-certs-20220629115611-24356: exit status 2 (16.108887324s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-20220629115611-24356 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220629115611-24356 -n embed-certs-20220629115611-24356
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220629115611-24356 -n embed-certs-20220629115611-24356
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220629115611-24356
helpers_test.go:235: (dbg) docker inspect embed-certs-20220629115611-24356:

-- stdout --
	[
	    {
	        "Id": "3865641b000a94b244654e77d9ca8816e1b071bda8a922bdc344b38142578e83",
	        "Created": "2022-06-29T18:56:18.782942519Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 267710,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T18:57:27.28926741Z",
	            "FinishedAt": "2022-06-29T18:57:25.331741028Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/3865641b000a94b244654e77d9ca8816e1b071bda8a922bdc344b38142578e83/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3865641b000a94b244654e77d9ca8816e1b071bda8a922bdc344b38142578e83/hostname",
	        "HostsPath": "/var/lib/docker/containers/3865641b000a94b244654e77d9ca8816e1b071bda8a922bdc344b38142578e83/hosts",
	        "LogPath": "/var/lib/docker/containers/3865641b000a94b244654e77d9ca8816e1b071bda8a922bdc344b38142578e83/3865641b000a94b244654e77d9ca8816e1b071bda8a922bdc344b38142578e83-json.log",
	        "Name": "/embed-certs-20220629115611-24356",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220629115611-24356:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220629115611-24356",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/47cad10a02fffea2aa9b72eaa908bbbb3e99dbb8d86b78bc4a28d35041dce0e6-init/diff:/var/lib/docker/overlay2/fffebe0fdfada5807aeb835ff23043496ab70477725ee4f168b630301ac03e45/diff:/var/lib/docker/overlay2/d4eb6d2f34aa8e5c143d900dccdec5da9e3d130567442e6745d4efac5202fe49/diff:/var/lib/docker/overlay2/eb35fadba12ed9c48500d69b77e98e7dd72e90d3de5197d58b370df5b5dca4c7/diff:/var/lib/docker/overlay2/7b63894f671ef1edaa7c3b80a2acbde52dcdb21970e320799b6884e79553ea3e/diff:/var/lib/docker/overlay2/3740b6bc6ff226137eb09a6350d4395dc04bd9012c6c66125dc2ea6b663082cd/diff:/var/lib/docker/overlay2/a2fda66ed4937725e85838baed61cac418abe2ba55b4e664bf944246efcdd371/diff:/var/lib/docker/overlay2/574408913c5c73ee699b85768bbb4c0ce70e697bf6eb623e32017c62e8413acd/diff:/var/lib/docker/overlay2/1cde03c3877bfb18ad0533f814863e3030abec268ff30faceab8815ea7e2daf2/diff:/var/lib/docker/overlay2/52bf889e64b2ea0160f303622d5febb9c52b864e5a6dc2bfa5db90933ccaaa29/diff:/var/lib/docker/overlay2/b131e6
ae4a7a7f5705d087e4001676276e4daa26d6acfc99799bb4992e322410/diff:/var/lib/docker/overlay2/3f5c774f6f46936a974bfc6530b012fda75a59b22450e3342486fe400ab4b531/diff:/var/lib/docker/overlay2/8462528084f0c44a79e421427e0e4bc9ddd7642428c47ff1899d41b265223245/diff:/var/lib/docker/overlay2/cb9765866d13ba37669ec242ea0a1af87c92c7291c716e52037a2ccadc64ac82/diff:/var/lib/docker/overlay2/f0d06e6fa53f3ca9622f1efcfac6fe3fd18d2e5b9e07be3d624b0b9987073e55/diff:/var/lib/docker/overlay2/4ebd12d8b25cff2d3d8a989c047b696088121f0964cc7f94c6d0178ef16e3e1f/diff:/var/lib/docker/overlay2/40e16f5720fd3a8c1c8792aea0ec143af819f19cad845dde40b57ed7e372ab73/diff:/var/lib/docker/overlay2/3ce5ee64ba683c997a13b7ffa65978b4c9652772729737facd794209d49251c3/diff:/var/lib/docker/overlay2/c55c549a78d490ea576942661ba65103ea2992693548217973bb8fa1a5948b74/diff:/var/lib/docker/overlay2/4651b16dbc2e22b8a43dc1154546514f2076168d12f9c108f85fe7c6e60325f0/diff:/var/lib/docker/overlay2/9576343ea03501b15b520a83ffdc675c6d9ecd501f6ffcf6564dd75aa4f2812a/diff:/var/lib/d
ocker/overlay2/635ba7d01f96fd1ec1acabf157f4e5c00cbf80adf65b7f8873e444745fef2c9b/diff:/var/lib/docker/overlay2/6bbe0ce6ca00a7eb5bd7c22def5fcab4ebecab4a0b4cbc5ed236429671a41b6c/diff:/var/lib/docker/overlay2/b335551ba0fcfd6bff6ef5627289041f3083dc338e67b4f4728d4937bb6fb33a/diff:/var/lib/docker/overlay2/58cd90f6ad9016f3c4befb63eac504c9d2f0fc66251c5c9e3348080785d3cec4/diff:/var/lib/docker/overlay2/b7d943a8463e032d405d531846436b89574f10efeea6e4f2df92e3bb0e169d8e/diff:/var/lib/docker/overlay2/e633899f71c18e322af1b75837392bc89fd4275534b5bc70037965b0b80a770d/diff:/var/lib/docker/overlay2/651aabda39b5851bd186e23bc84f1029d819ed8eb032b13ac12f50f3d1486bfb/diff:/var/lib/docker/overlay2/3b137e27694d242a419b3fd2f8605837edfe77dae9462c63c3d7b41538e82591/diff:/var/lib/docker/overlay2/e9d4369b871c47acb146b73f8cbe14b89b0f74027df9117a7dc73f5dee8fee1c/diff:/var/lib/docker/overlay2/9379269362a969b07cc7d7f9faff9fa3b745529df38758733014a5dbe2470775/diff:/var/lib/docker/overlay2/9231c154723fa536d9894f703ec0388448e8611d5a01d54bca3a5b0a0b1
7ffd2/diff:/var/lib/docker/overlay2/9610e37ded5c6da7bd2c8edc56c3ae864637bb354f8ea3d6d1ccee6bd5c2aa7f/diff:/var/lib/docker/overlay2/025ecca5e756b1b8177204df7b2f2567a76dda456b2f1a8e312efd63150a8943/diff:/var/lib/docker/overlay2/7e69089e438e096c36ea0a4a37280fd036841e3287e57635e3407eb58fc0b6da/diff:/var/lib/docker/overlay2/c6d9ef67ed33e64c8ac8c4cdc7c33eb68f5266987969676165cabc2cf2fd346b/diff:/var/lib/docker/overlay2/394627c68237f7993b91eb0c377001630bb2e709dd58f65d899d44a3586dae91/diff:/var/lib/docker/overlay2/0c0c3c94789fc85cd70d9ee2b56d67ce6471d4dced47f21f15152d4edb6bc3e5/diff:/var/lib/docker/overlay2/849809e48c9bcbfe092aa063fcd274f284eeacde89acbb602b439d4cf0aef9b6/diff:/var/lib/docker/overlay2/49c27f0a55f204b161aa2da33ba8004f46cb93bf673975ad1b6286ce659db632/diff:/var/lib/docker/overlay2/a712a8f5cdb2f3840c706296240407405826d2936df034393c1ddf3cf2480b5f/diff:/var/lib/docker/overlay2/47949bfd134ff7a50def5e9b3af3424faf216354d1f157552f3c63c67c2728ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/47cad10a02fffea2aa9b72eaa908bbbb3e99dbb8d86b78bc4a28d35041dce0e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/47cad10a02fffea2aa9b72eaa908bbbb3e99dbb8d86b78bc4a28d35041dce0e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/47cad10a02fffea2aa9b72eaa908bbbb3e99dbb8d86b78bc4a28d35041dce0e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220629115611-24356",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220629115611-24356/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220629115611-24356",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220629115611-24356",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220629115611-24356",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3532b4634d6cee6d2a3c955d4512246775f2b9b5ecf455de20e03773d8343824",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60811"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60812"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60813"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60814"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60815"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3532b4634d6c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220629115611-24356": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3865641b000a",
	                        "embed-certs-20220629115611-24356"
	                    ],
	                    "NetworkID": "789a63df411698d553bc4fa5ef7a823a36c8c59abd40f53ac9c3c90a49d15914",
	                    "EndpointID": "7c491b8aa76b8c8f205c04c8e63195f0cc7c6f91ded822eb6800f496aad24108",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
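The `docker inspect` output above maps each exposed container port (22, 2376, 5000, 8443, 32443) to a forwarded host port under `NetworkSettings.Ports` — on Docker Desktop for macOS this host port is the only way to reach the apiserver on 8443. A minimal sketch of extracting those bindings from the inspect JSON with Python's stdlib; the embedded snippet is a trimmed, illustrative copy of the structure above, not the full document:

```python
import json

# Trimmed sample mirroring the "NetworkSettings.Ports" shape produced by
# `docker inspect <container>` (values copied from the report above).
inspect_output = json.loads("""
[
    {
        "NetworkSettings": {
            "Ports": {
                "22/tcp":   [{"HostIp": "0.0.0.0", "HostPort": "60811"}],
                "8443/tcp": [{"HostIp": "0.0.0.0", "HostPort": "60815"}]
            }
        }
    }
]
""")

# Flatten container-port -> first host-port binding, skipping unbound ports.
ports = {
    cport: bindings[0]["HostPort"]
    for cport, bindings in inspect_output[0]["NetworkSettings"]["Ports"].items()
    if bindings
}

print(ports["8443/tcp"])  # host port forwarded to the apiserver -> 60815
```

The same lookup can be done without Python via a Go template, e.g. `docker inspect --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' <container>`.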
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220629115611-24356 -n embed-certs-20220629115611-24356
E0629 12:03:23.718463   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629112951-24356/client.crt: no such file or directory
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-20220629115611-24356 logs -n 25

=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p embed-certs-20220629115611-24356 logs -n 25: (2.957457612s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:47 PDT | 29 Jun 22 11:47 PDT |
	|         | kubenet-20220629112950-24356                      |          |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:47 PDT |                     |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |          |         |         |                     |                     |
	|         | --disable-driver-mounts                           |          |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |          |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:48 PDT | 29 Jun 22 11:48 PDT |
	|         | kubenet-20220629112950-24356                      |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:48 PDT | 29 Jun 22 11:49 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --preload=false                       |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 29 Jun 22 11:49 PDT | 29 Jun 22 11:49 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:49 PDT | 29 Jun 22 11:49 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 29 Jun 22 11:49 PDT | 29 Jun 22 11:49 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:49 PDT | 29 Jun 22 11:54 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --preload=false                       |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 29 Jun 22 11:51 PDT |                     |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:52 PDT | 29 Jun 22 11:53 PDT |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 29 Jun 22 11:53 PDT | 29 Jun 22 11:53 PDT |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:53 PDT |                     |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |          |         |         |                     |                     |
	|         | --disable-driver-mounts                           |          |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |          |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:55 PDT | 29 Jun 22 11:55 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | sudo crictl images -o json                        |          |         |         |                     |                     |
	| pause   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:55 PDT | 29 Jun 22 11:55 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	| unpause | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:55 PDT | 29 Jun 22 11:55 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:56 PDT | 29 Jun 22 11:56 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:56 PDT | 29 Jun 22 11:56 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:56 PDT | 29 Jun 22 11:56 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT | 29 Jun 22 11:57 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT | 29 Jun 22 11:57 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT | 29 Jun 22 11:57 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT | 29 Jun 22 12:02 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:02 PDT | 29 Jun 22 12:02 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | sudo crictl images -o json                        |          |         |         |                     |                     |
	| pause   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:02 PDT | 29 Jun 22 12:02 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	| unpause | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/29 11:57:26
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 11:57:26.028245   39984 out.go:296] Setting OutFile to fd 1 ...
	I0629 11:57:26.028421   39984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:57:26.028426   39984 out.go:309] Setting ErrFile to fd 2...
	I0629 11:57:26.028430   39984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:57:26.028744   39984 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 11:57:26.029007   39984 out.go:303] Setting JSON to false
	I0629 11:57:26.044844   39984 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":10614,"bootTime":1656518432,"procs":387,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0629 11:57:26.044930   39984 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 11:57:26.071215   39984 out.go:177] * [embed-certs-20220629115611-24356] minikube v1.26.0 on Darwin 12.4
	I0629 11:57:26.114439   39984 notify.go:193] Checking for updates...
	I0629 11:57:26.136279   39984 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 11:57:26.158396   39984 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 11:57:26.180197   39984 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0629 11:57:26.201576   39984 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 11:57:26.223504   39984 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 11:57:26.245798   39984 config.go:178] Loaded profile config "embed-certs-20220629115611-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 11:57:26.246444   39984 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 11:57:26.316909   39984 docker.go:137] docker version: linux-20.10.16
	I0629 11:57:26.317080   39984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:57:26.446690   39984 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 18:57:26.381567768 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:57:26.468611   39984 out.go:177] * Using the docker driver based on existing profile
	I0629 11:57:26.489667   39984 start.go:284] selected driver: docker
	I0629 11:57:26.489698   39984 start.go:808] validating driver "docker" against &{Name:embed-certs-20220629115611-24356 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220629115611-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:57:26.489832   39984 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 11:57:26.493277   39984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:57:26.615477   39984 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 18:57:26.552906823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:57:26.615651   39984 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0629 11:57:26.615666   39984 cni.go:95] Creating CNI manager for ""
	I0629 11:57:26.615676   39984 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:57:26.615683   39984 start_flags.go:310] config:
	{Name:embed-certs-20220629115611-24356 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220629115611-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cl
uster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:57:26.659812   39984 out.go:177] * Starting control plane node embed-certs-20220629115611-24356 in cluster embed-certs-20220629115611-24356
	I0629 11:57:26.681749   39984 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 11:57:26.703472   39984 out.go:177] * Pulling base image ...
	I0629 11:57:26.745579   39984 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 11:57:26.745590   39984 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 11:57:26.745645   39984 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0629 11:57:26.745663   39984 cache.go:57] Caching tarball of preloaded images
	I0629 11:57:26.745789   39984 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 11:57:26.745807   39984 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0629 11:57:26.746584   39984 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/config.json ...
	I0629 11:57:26.809113   39984 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 11:57:26.809128   39984 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 11:57:26.809140   39984 cache.go:208] Successfully downloaded all kic artifacts
	I0629 11:57:26.809200   39984 start.go:352] acquiring machines lock for embed-certs-20220629115611-24356: {Name:mk0bdb566e64e1b997b63c331e0b76362860de65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 11:57:26.809294   39984 start.go:356] acquired machines lock for "embed-certs-20220629115611-24356" in 67.417µs
	I0629 11:57:26.809317   39984 start.go:94] Skipping create...Using existing machine configuration
	I0629 11:57:26.809326   39984 fix.go:55] fixHost starting: 
	I0629 11:57:26.809545   39984 cli_runner.go:164] Run: docker container inspect embed-certs-20220629115611-24356 --format={{.State.Status}}
	I0629 11:57:26.877064   39984 fix.go:103] recreateIfNeeded on embed-certs-20220629115611-24356: state=Stopped err=<nil>
	W0629 11:57:26.877097   39984 fix.go:129] unexpected machine state, will restart: <nil>
	I0629 11:57:26.921097   39984 out.go:177] * Restarting existing docker container for "embed-certs-20220629115611-24356" ...
	I0629 11:57:26.943046   39984 cli_runner.go:164] Run: docker start embed-certs-20220629115611-24356
	I0629 11:57:27.298057   39984 cli_runner.go:164] Run: docker container inspect embed-certs-20220629115611-24356 --format={{.State.Status}}
	I0629 11:57:27.370883   39984 kic.go:416] container "embed-certs-20220629115611-24356" state is running.
	I0629 11:57:27.371467   39984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220629115611-24356
	I0629 11:57:27.450035   39984 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/config.json ...
	I0629 11:57:27.450491   39984 machine.go:88] provisioning docker machine ...
	I0629 11:57:27.450523   39984 ubuntu.go:169] provisioning hostname "embed-certs-20220629115611-24356"
	I0629 11:57:27.450615   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:27.526657   39984 main.go:134] libmachine: Using SSH client type: native
	I0629 11:57:27.526849   39984 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60811 <nil> <nil>}
	I0629 11:57:27.526862   39984 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220629115611-24356 && echo "embed-certs-20220629115611-24356" | sudo tee /etc/hostname
	I0629 11:57:27.655714   39984 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220629115611-24356
	
	I0629 11:57:27.655798   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:27.730765   39984 main.go:134] libmachine: Using SSH client type: native
	I0629 11:57:27.730938   39984 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60811 <nil> <nil>}
	I0629 11:57:27.730953   39984 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220629115611-24356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220629115611-24356/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220629115611-24356' | sudo tee -a /etc/hosts; 
				fi
			fi
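The script minikube runs over SSH above is an idempotent `/etc/hosts` update: it only touches the file when the hostname is missing, rewriting an existing `127.0.1.1` line if one exists and appending otherwise. A standalone sketch of the same pattern (the temp file and `embed-certs-demo` name are stand-ins; the real script targets `/etc/hosts` via `sudo`):

```shell
# Stand-in for /etc/hosts and for the provisioned hostname.
HOSTS=$(mktemp)
NAME=embed-certs-demo
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

# Skip entirely if the hostname is already present.
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # Rewrite the existing 127.0.1.1 entry in place.
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # No 127.0.1.1 line yet: append one.
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
```

Running the script twice leaves the file unchanged on the second pass, which is why the log can safely re-run it on every restart.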
	I0629 11:57:27.848950   39984 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 11:57:27.848968   39984 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube}
	I0629 11:57:27.848989   39984 ubuntu.go:177] setting up certificates
	I0629 11:57:27.848996   39984 provision.go:83] configureAuth start
	I0629 11:57:27.849084   39984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220629115611-24356
	I0629 11:57:27.929959   39984 provision.go:138] copyHostCerts
	I0629 11:57:27.930123   39984 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem, removing ...
	I0629 11:57:27.930147   39984 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem
	I0629 11:57:27.930263   39984 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem (1082 bytes)
	I0629 11:57:27.930508   39984 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem, removing ...
	I0629 11:57:27.930517   39984 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem
	I0629 11:57:27.930576   39984 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem (1123 bytes)
	I0629 11:57:27.930756   39984 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem, removing ...
	I0629 11:57:27.930764   39984 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem
	I0629 11:57:27.930836   39984 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem (1675 bytes)
	I0629 11:57:27.930964   39984 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220629115611-24356 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220629115611-24356]
	I0629 11:57:27.999428   39984 provision.go:172] copyRemoteCerts
	I0629 11:57:27.999495   39984 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 11:57:27.999547   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:28.073332   39984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60811 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/embed-certs-20220629115611-24356/id_rsa Username:docker}
	I0629 11:57:28.161829   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0629 11:57:28.180214   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0629 11:57:28.196728   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0629 11:57:28.213826   39984 provision.go:86] duration metric: configureAuth took 364.804405ms
	I0629 11:57:28.213840   39984 ubuntu.go:193] setting minikube options for container-runtime
	I0629 11:57:28.214049   39984 config.go:178] Loaded profile config "embed-certs-20220629115611-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 11:57:28.214114   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:28.285550   39984 main.go:134] libmachine: Using SSH client type: native
	I0629 11:57:28.285697   39984 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60811 <nil> <nil>}
	I0629 11:57:28.285709   39984 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0629 11:57:28.404316   39984 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0629 11:57:28.404329   39984 ubuntu.go:71] root file system type: overlay
	I0629 11:57:28.404488   39984 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0629 11:57:28.404565   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:28.475355   39984 main.go:134] libmachine: Using SSH client type: native
	I0629 11:57:28.475494   39984 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60811 <nil> <nil>}
	I0629 11:57:28.475543   39984 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0629 11:57:28.601145   39984 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0629 11:57:28.601241   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:28.672126   39984 main.go:134] libmachine: Using SSH client type: native
	I0629 11:57:28.672296   39984 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60811 <nil> <nil>}
	I0629 11:57:28.672310   39984 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
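The command above is a "diff, then swap and restart" guard: the freshly rendered `docker.service.new` only replaces the live unit (triggering `daemon-reload`, `enable`, and `restart`) when `diff` reports a difference, so an unchanged unit costs no daemon restart. A minimal sketch of that pattern with temp files standing in for the unit paths and an `echo` standing in for the `systemctl` calls:

```shell
# Stand-ins for /lib/systemd/system/docker.service and docker.service.new.
OLD=$(mktemp); NEW=$(mktemp)
echo 'ExecStart=/usr/bin/dockerd --old-flags' > "$OLD"
echo 'ExecStart=/usr/bin/dockerd --new-flags' > "$NEW"

# diff -u exits nonzero when the files differ, so the replacement branch
# (and, in the real command, daemon-reload + restart) runs only on change.
diff -u "$OLD" "$NEW" || { mv "$NEW" "$OLD"; echo "would: daemon-reload && restart docker"; }
```

On a second invocation with identical files, `diff` exits zero and the braced block is skipped entirely.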
	I0629 11:57:28.795931   39984 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 11:57:28.795946   39984 machine.go:91] provisioned docker machine in 1.345405346s
	I0629 11:57:28.795961   39984 start.go:306] post-start starting for "embed-certs-20220629115611-24356" (driver="docker")
	I0629 11:57:28.795968   39984 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 11:57:28.796037   39984 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 11:57:28.796087   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:28.866293   39984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60811 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/embed-certs-20220629115611-24356/id_rsa Username:docker}
	I0629 11:57:28.951759   39984 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 11:57:28.955285   39984 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 11:57:28.955300   39984 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 11:57:28.955307   39984 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 11:57:28.955312   39984 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 11:57:28.955321   39984 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/addons for local assets ...
	I0629 11:57:28.955430   39984 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files for local assets ...
	I0629 11:57:28.955566   39984 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem -> 243562.pem in /etc/ssl/certs
	I0629 11:57:28.955718   39984 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 11:57:28.962930   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /etc/ssl/certs/243562.pem (1708 bytes)
	I0629 11:57:28.979721   39984 start.go:309] post-start completed in 183.73758ms
	I0629 11:57:28.979798   39984 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 11:57:28.979853   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:29.052656   39984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60811 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/embed-certs-20220629115611-24356/id_rsa Username:docker}
	I0629 11:57:29.137653   39984 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 11:57:29.142085   39984 fix.go:57] fixHost completed within 2.332689804s
	I0629 11:57:29.142096   39984 start.go:81] releasing machines lock for "embed-certs-20220629115611-24356", held for 2.332724366s
	I0629 11:57:29.142164   39984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220629115611-24356
	I0629 11:57:29.211897   39984 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 11:57:29.211897   39984 ssh_runner.go:195] Run: systemctl --version
	I0629 11:57:29.211957   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:29.211969   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:29.288098   39984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60811 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/embed-certs-20220629115611-24356/id_rsa Username:docker}
	I0629 11:57:29.290800   39984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60811 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/embed-certs-20220629115611-24356/id_rsa Username:docker}
	I0629 11:57:29.373189   39984 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0629 11:57:29.857399   39984 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0629 11:57:29.857467   39984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 11:57:29.869954   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 11:57:29.883131   39984 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0629 11:57:29.955029   39984 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0629 11:57:30.019548   39984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 11:57:30.090812   39984 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0629 11:57:30.329132   39984 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0629 11:57:30.399299   39984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 11:57:30.472742   39984 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0629 11:57:30.482620   39984 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0629 11:57:30.482690   39984 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0629 11:57:30.486666   39984 start.go:468] Will wait 60s for crictl version
	I0629 11:57:30.486722   39984 ssh_runner.go:195] Run: sudo crictl version
	I0629 11:57:30.587073   39984 start.go:477] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0629 11:57:30.587149   39984 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 11:57:30.622161   39984 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 11:57:30.700040   39984 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0629 11:57:30.700166   39984 cli_runner.go:164] Run: docker exec -t embed-certs-20220629115611-24356 dig +short host.docker.internal
	I0629 11:57:30.827612   39984 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0629 11:57:30.827718   39984 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0629 11:57:30.831832   39984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 11:57:30.841288   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:30.913390   39984 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 11:57:30.913460   39984 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 11:57:30.944383   39984 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0629 11:57:30.944399   39984 docker.go:533] Images already preloaded, skipping extraction
	I0629 11:57:30.944478   39984 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 11:57:30.975315   39984 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0629 11:57:30.975343   39984 cache_images.go:84] Images are preloaded, skipping loading
	I0629 11:57:30.975415   39984 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0629 11:57:31.045851   39984 cni.go:95] Creating CNI manager for ""
	I0629 11:57:31.050165   39984 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:57:31.050195   39984 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0629 11:57:31.050222   39984 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220629115611-24356 NodeName:embed-certs-20220629115611-24356 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile
:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 11:57:31.050404   39984 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-20220629115611-24356"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0629 11:57:31.050551   39984 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-20220629115611-24356 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220629115611-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0629 11:57:31.050644   39984 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0629 11:57:31.059402   39984 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 11:57:31.059454   39984 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0629 11:57:31.066631   39984 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (494 bytes)
	I0629 11:57:31.079513   39984 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 11:57:31.092419   39984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I0629 11:57:31.105233   39984 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0629 11:57:31.108958   39984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 11:57:31.118325   39984 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356 for IP: 192.168.67.2
	I0629 11:57:31.118436   39984 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key
	I0629 11:57:31.118497   39984 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key
	I0629 11:57:31.118573   39984 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/client.key
	I0629 11:57:31.118636   39984 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/apiserver.key.c7fa3a9e
	I0629 11:57:31.118686   39984 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/proxy-client.key
	I0629 11:57:31.118892   39984 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem (1338 bytes)
	W0629 11:57:31.118931   39984 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356_empty.pem, impossibly tiny 0 bytes
	I0629 11:57:31.118944   39984 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem (1679 bytes)
	I0629 11:57:31.118978   39984 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem (1082 bytes)
	I0629 11:57:31.119010   39984 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem (1123 bytes)
	I0629 11:57:31.119037   39984 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem (1675 bytes)
	I0629 11:57:31.119098   39984 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem (1708 bytes)
	I0629 11:57:31.119668   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0629 11:57:31.136862   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0629 11:57:31.153564   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0629 11:57:31.170777   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0629 11:57:31.187816   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 11:57:31.204573   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0629 11:57:31.221464   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 11:57:31.239026   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0629 11:57:31.255730   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem --> /usr/share/ca-certificates/24356.pem (1338 bytes)
	I0629 11:57:31.272688   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /usr/share/ca-certificates/243562.pem (1708 bytes)
	I0629 11:57:31.289538   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 11:57:31.306720   39984 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0629 11:57:31.319465   39984 ssh_runner.go:195] Run: openssl version
	I0629 11:57:31.324535   39984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24356.pem && ln -fs /usr/share/ca-certificates/24356.pem /etc/ssl/certs/24356.pem"
	I0629 11:57:31.332540   39984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24356.pem
	I0629 11:57:31.336652   39984 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 17:58 /usr/share/ca-certificates/24356.pem
	I0629 11:57:31.336698   39984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24356.pem
	I0629 11:57:31.342301   39984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24356.pem /etc/ssl/certs/51391683.0"
	I0629 11:57:31.349622   39984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/243562.pem && ln -fs /usr/share/ca-certificates/243562.pem /etc/ssl/certs/243562.pem"
	I0629 11:57:31.357282   39984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/243562.pem
	I0629 11:57:31.361696   39984 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 17:58 /usr/share/ca-certificates/243562.pem
	I0629 11:57:31.361747   39984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/243562.pem
	I0629 11:57:31.366990   39984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/243562.pem /etc/ssl/certs/3ec20f2e.0"
	I0629 11:57:31.374502   39984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 11:57:31.382218   39984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:57:31.385803   39984 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 17:54 /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:57:31.385848   39984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:57:31.390826   39984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 11:57:31.397764   39984 kubeadm.go:395] StartCluster: {Name:embed-certs-20220629115611-24356 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220629115611-24356 Namespace:default APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> Expose
dPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:57:31.397873   39984 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 11:57:31.427173   39984 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0629 11:57:31.434832   39984 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0629 11:57:31.434846   39984 kubeadm.go:626] restartCluster start
	I0629 11:57:31.434897   39984 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0629 11:57:31.441586   39984 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:31.441651   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:31.513483   39984 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220629115611-24356" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 11:57:31.513643   39984 kubeconfig.go:127] "embed-certs-20220629115611-24356" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig - will repair!
	I0629 11:57:31.513999   39984 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk20ebad566718388182fa7c9da1cb4ef6bd9ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:57:31.515316   39984 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0629 11:57:31.530420   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:31.530480   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:31.538594   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:31.738692   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:31.738802   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:31.747924   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:31.940764   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:31.940962   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:31.953388   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:32.138925   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:32.139021   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:32.150641   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:32.339007   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:32.339144   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:32.350071   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:32.538785   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:32.538883   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:32.549429   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:32.740773   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:32.740914   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:32.751283   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:32.940779   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:32.940965   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:32.952319   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:33.139151   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:33.139215   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:33.149931   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:33.338763   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:33.338882   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:33.347730   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:33.540825   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:33.540989   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:33.551698   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:33.739521   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:33.739687   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:33.750188   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:33.939155   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:33.939254   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:33.949817   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:34.140162   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:34.140353   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:34.150863   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:34.340139   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:34.340257   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:34.351094   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:34.540169   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:34.540353   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:34.551334   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:34.551344   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:34.551403   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:34.559886   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:34.559897   39984 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0629 11:57:34.559905   39984 kubeadm.go:1092] stopping kube-system containers ...
	I0629 11:57:34.559958   39984 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 11:57:34.590002   39984 docker.go:434] Stopping containers: [666dcbf78fe0 ddb4a3ba17a8 6b729b461ef0 b814135cd0a1 e13a428052eb 0dd4b988196b fae1c540c6c3 4d48afea68d9 196dbfd07a20 439d99c75b27 cc212149d36c 984a7e540bed 80e09584f648 9db02521aa04 3369302f8f17 d66a49ab53be]
	I0629 11:57:34.590078   39984 ssh_runner.go:195] Run: docker stop 666dcbf78fe0 ddb4a3ba17a8 6b729b461ef0 b814135cd0a1 e13a428052eb 0dd4b988196b fae1c540c6c3 4d48afea68d9 196dbfd07a20 439d99c75b27 cc212149d36c 984a7e540bed 80e09584f648 9db02521aa04 3369302f8f17 d66a49ab53be
	I0629 11:57:34.622333   39984 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0629 11:57:34.633894   39984 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 11:57:34.642013   39984 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun 29 18:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun 29 18:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Jun 29 18:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun 29 18:56 /etc/kubernetes/scheduler.conf
	
	I0629 11:57:34.642067   39984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0629 11:57:34.650335   39984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0629 11:57:34.658274   39984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0629 11:57:34.666006   39984 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:34.666067   39984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0629 11:57:34.674854   39984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0629 11:57:34.682511   39984 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:34.682565   39984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0629 11:57:34.689948   39984 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 11:57:34.697944   39984 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0629 11:57:34.697960   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:57:34.743910   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:57:35.702128   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:57:35.884195   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:57:35.931141   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:57:35.978909   39984 api_server.go:51] waiting for apiserver process to appear ...
	I0629 11:57:35.978974   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:57:36.489509   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:57:36.991297   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:57:37.491468   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:57:37.539412   39984 api_server.go:71] duration metric: took 1.560450953s to wait for apiserver process to appear ...
	I0629 11:57:37.539430   39984 api_server.go:87] waiting for apiserver healthz status ...
	I0629 11:57:37.539444   39984 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60815/healthz ...
	I0629 11:57:40.290730   39984 api_server.go:266] https://127.0.0.1:60815/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0629 11:57:40.290748   39984 api_server.go:102] status: https://127.0.0.1:60815/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0629 11:57:40.792942   39984 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60815/healthz ...
	I0629 11:57:40.800561   39984 api_server.go:266] https://127.0.0.1:60815/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 11:57:40.800574   39984 api_server.go:102] status: https://127.0.0.1:60815/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 11:57:41.291032   39984 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60815/healthz ...
	I0629 11:57:41.296338   39984 api_server.go:266] https://127.0.0.1:60815/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 11:57:41.296358   39984 api_server.go:102] status: https://127.0.0.1:60815/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 11:57:41.791011   39984 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60815/healthz ...
	I0629 11:57:41.797671   39984 api_server.go:266] https://127.0.0.1:60815/healthz returned 200:
	ok
	I0629 11:57:41.804473   39984 api_server.go:140] control plane version: v1.24.2
	I0629 11:57:41.804485   39984 api_server.go:130] duration metric: took 4.264923117s to wait for apiserver health ...
	I0629 11:57:41.804492   39984 cni.go:95] Creating CNI manager for ""
	I0629 11:57:41.804502   39984 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:57:41.804513   39984 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 11:57:41.832519   39984 system_pods.go:59] 8 kube-system pods found
	I0629 11:57:41.832535   39984 system_pods.go:61] "coredns-6d4b75cb6d-pnzfc" [d1c86d77-1548-4a2f-b9c7-42b4bf4a6a3d] Running
	I0629 11:57:41.832541   39984 system_pods.go:61] "etcd-embed-certs-20220629115611-24356" [d91824a5-2512-44b7-82ef-0fa1347aaabf] Running
	I0629 11:57:41.832547   39984 system_pods.go:61] "kube-apiserver-embed-certs-20220629115611-24356" [da634837-5c4e-4f9f-9a67-2cc008c0440b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0629 11:57:41.832553   39984 system_pods.go:61] "kube-controller-manager-embed-certs-20220629115611-24356" [52be6bd2-1731-4717-bc8a-e66fd7626c22] Running
	I0629 11:57:41.832556   39984 system_pods.go:61] "kube-proxy-pcxgq" [27e07fcd-c6b6-438e-a098-a226b21b33e1] Running
	I0629 11:57:41.832561   39984 system_pods.go:61] "kube-scheduler-embed-certs-20220629115611-24356" [09df9d02-46aa-44bc-afe4-b16bcd31afd0] Running
	I0629 11:57:41.832566   39984 system_pods.go:61] "metrics-server-5c6f97fb75-rxdvx" [f03ad7f1-c31c-4563-a988-6b36ea877e9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 11:57:41.832573   39984 system_pods.go:61] "storage-provisioner" [941d4d53-8827-455c-bf13-eccd87cfbfe5] Running
	I0629 11:57:41.832577   39984 system_pods.go:74] duration metric: took 28.058937ms to wait for pod list to return data ...
	I0629 11:57:41.832583   39984 node_conditions.go:102] verifying NodePressure condition ...
	I0629 11:57:41.835565   39984 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0629 11:57:41.835583   39984 node_conditions.go:123] node cpu capacity is 6
	I0629 11:57:41.835591   39984 node_conditions.go:105] duration metric: took 3.005124ms to run NodePressure ...
	I0629 11:57:41.835602   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:57:42.037431   39984 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0629 11:57:42.043980   39984 kubeadm.go:777] kubelet initialised
	I0629 11:57:42.043992   39984 kubeadm.go:778] duration metric: took 6.540999ms waiting for restarted kubelet to initialise ...
	I0629 11:57:42.044000   39984 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 11:57:42.050820   39984 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-pnzfc" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:42.056213   39984 pod_ready.go:92] pod "coredns-6d4b75cb6d-pnzfc" in "kube-system" namespace has status "Ready":"True"
	I0629 11:57:42.056222   39984 pod_ready.go:81] duration metric: took 5.36795ms waiting for pod "coredns-6d4b75cb6d-pnzfc" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:42.056229   39984 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:42.061951   39984 pod_ready.go:92] pod "etcd-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 11:57:42.061961   39984 pod_ready.go:81] duration metric: took 5.728041ms waiting for pod "etcd-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:42.061968   39984 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:44.073865   39984 pod_ready.go:102] pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 11:57:46.077904   39984 pod_ready.go:102] pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 11:57:48.576009   39984 pod_ready.go:102] pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 11:57:51.075775   39984 pod_ready.go:102] pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 11:57:53.075358   39984 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 11:57:53.075371   39984 pod_ready.go:81] duration metric: took 11.01306776s waiting for pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:53.075377   39984 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:53.079816   39984 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 11:57:53.079824   39984 pod_ready.go:81] duration metric: took 4.442048ms waiting for pod "kube-controller-manager-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:53.079829   39984 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pcxgq" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:53.084576   39984 pod_ready.go:92] pod "kube-proxy-pcxgq" in "kube-system" namespace has status "Ready":"True"
	I0629 11:57:53.084583   39984 pod_ready.go:81] duration metric: took 4.749511ms waiting for pod "kube-proxy-pcxgq" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:53.084589   39984 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:53.088625   39984 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 11:57:53.088632   39984 pod_ready.go:81] duration metric: took 4.039623ms waiting for pod "kube-scheduler-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:53.088640   39984 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:55.097461   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:57:57.100786   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:57:59.601286   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:02.099451   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:04.600718   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:07.101221   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:09.600874   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:12.099278   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:14.601619   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:17.101075   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:19.102702   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:21.600733   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:24.099200   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:26.102268   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:28.599567   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:30.599655   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:32.599970   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:35.101359   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:37.600978   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:40.101820   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:42.601212   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:45.099127   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:47.100293   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:49.101795   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:51.600853   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:54.099798   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:56.102348   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:58.599972   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:00.602127   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:03.099999   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:05.602102   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	W0629 11:59:09.269281   39321 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0629 11:59:09.269312   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0629 11:59:09.691823   39321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 11:59:09.701755   39321 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 11:59:09.701805   39321 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 11:59:09.709759   39321 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 11:59:09.709777   39321 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 11:59:10.453324   39321 out.go:204]   - Generating certificates and keys ...
	I0629 11:59:08.103868   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:10.600504   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:13.100908   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:15.103349   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:11.075112   39321 out.go:204]   - Booting up control plane ...
	I0629 11:59:17.600597   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:19.602441   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:22.101027   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:24.601921   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:27.102740   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:29.103218   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:31.602024   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:33.603482   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:36.104291   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:38.601027   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:40.602533   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:42.604039   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:45.105214   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:47.603677   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:49.606151   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:52.104004   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:54.106224   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:56.605130   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:58.606838   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:01.105420   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:03.107040   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:05.605975   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:07.607176   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:09.607415   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:12.108174   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:14.607016   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:16.608058   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:18.608278   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:21.108388   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:23.110530   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:25.609089   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:27.610444   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:30.108624   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:32.109598   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:34.613349   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:37.108006   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:39.109710   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:41.608341   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:43.610410   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:46.106908   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:48.108652   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:50.608608   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:52.609008   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:55.109271   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:57.610864   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:00.109777   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:02.109951   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:04.110413   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:06.018998   39321 kubeadm.go:397] StartCluster complete in 7m59.760603139s
	I0629 12:01:06.019078   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 12:01:06.047361   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.083489   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 12:01:06.083580   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 12:01:06.118045   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.118058   39321 logs.go:276] No container was found matching "etcd"
	I0629 12:01:06.118119   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 12:01:06.148512   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.148524   39321 logs.go:276] No container was found matching "coredns"
	I0629 12:01:06.148587   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 12:01:06.177707   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.177719   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 12:01:06.177776   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 12:01:06.210822   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.210835   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 12:01:06.210895   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 12:01:06.243800   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.243812   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 12:01:06.243868   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 12:01:06.274291   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.274305   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 12:01:06.274368   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 12:01:06.308104   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.308119   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 12:01:06.308126   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 12:01:06.308133   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 12:01:06.347949   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 12:01:06.347968   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 12:01:06.361249   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 12:01:06.361264   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 12:01:06.413780   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 12:01:06.413793   39321 logs.go:123] Gathering logs for Docker ...
	I0629 12:01:06.413800   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 12:01:06.427622   39321 logs.go:123] Gathering logs for container status ...
	I0629 12:01:06.427633   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 12:01:08.487011   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.059302402s)
	W0629 12:01:08.487125   39321 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0629 12:01:08.487150   39321 out.go:239] * 
	W0629 12:01:08.487259   39321 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0629 12:01:08.487274   39321 out.go:239] * 
	W0629 12:01:08.487946   39321 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0629 12:01:08.550616   39321 out.go:177] 
	W0629 12:01:08.592802   39321 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0629 12:01:08.592939   39321 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0629 12:01:08.593004   39321 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0629 12:01:08.634371   39321 out.go:177] 
	I0629 12:01:06.612352   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:09.109064   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:11.110458   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:13.611037   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:16.110193   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:18.610846   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:21.112357   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:23.610136   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:25.612152   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:28.111145   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:30.609416   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:32.611545   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:35.110917   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:37.111203   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:39.611000   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:41.618644   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:44.111165   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:46.612134   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:49.111541   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:51.611950   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:53.104400   39984 pod_ready.go:81] duration metric: took 4m0.003882789s waiting for pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace to be "Ready" ...
	E0629 12:01:53.104484   39984 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0629 12:01:53.104529   39984 pod_ready.go:38] duration metric: took 4m11.048321541s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 12:01:53.104568   39984 kubeadm.go:630] restartCluster took 4m21.657198964s
	W0629 12:01:53.104718   39984 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0629 12:01:53.104746   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0629 12:01:55.470034   39984 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.365199161s)
	I0629 12:01:55.470094   39984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 12:01:55.480295   39984 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 12:01:55.488199   39984 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 12:01:55.488247   39984 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 12:01:55.495381   39984 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 12:01:55.495403   39984 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 12:01:55.784055   39984 out.go:204]   - Generating certificates and keys ...
	I0629 12:01:56.585517   39984 out.go:204]   - Booting up control plane ...
	I0629 12:02:03.144758   39984 out.go:204]   - Configuring RBAC rules ...
	I0629 12:02:03.522675   39984 cni.go:95] Creating CNI manager for ""
	I0629 12:02:03.522687   39984 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 12:02:03.522702   39984 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0629 12:02:03.522795   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:03.522801   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=80ef72c6e06144133907f90b1b2924df52b551ed minikube.k8s.io/name=embed-certs-20220629115611-24356 minikube.k8s.io/updated_at=2022_06_29T12_02_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:03.661591   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:03.661594   39984 ops.go:34] apiserver oom_adj: -16
	I0629 12:02:04.218161   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:04.718171   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:05.217246   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:05.717318   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:06.218501   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:06.718570   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:07.216723   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:07.717314   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:08.218695   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:08.718739   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:09.216580   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:09.718244   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:10.216838   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:10.716640   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:11.216892   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:11.716821   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:12.217178   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:12.716857   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:13.218711   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:13.716758   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:14.218899   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:14.718977   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:15.217017   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:15.718977   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:16.216819   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:16.717000   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:16.789950   39984 kubeadm.go:1045] duration metric: took 13.266821249s to wait for elevateKubeSystemPrivileges.
	I0629 12:02:16.789969   39984 kubeadm.go:397] StartCluster complete in 4m45.378983921s
	I0629 12:02:16.789985   39984 settings.go:142] acquiring lock: {Name:mk8cd784535a926dd1b6955ad1b3a357865d16d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 12:02:16.790067   39984 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 12:02:16.790800   39984 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk20ebad566718388182fa7c9da1cb4ef6bd9ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 12:02:17.305700   39984 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220629115611-24356" rescaled to 1
	I0629 12:02:17.305741   39984 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 12:02:17.305750   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0629 12:02:17.305782   39984 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0629 12:02:17.305909   39984 config.go:178] Loaded profile config "embed-certs-20220629115611-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 12:02:17.329199   39984 out.go:177] * Verifying Kubernetes components...
	I0629 12:02:17.329263   39984 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220629115611-24356"
	I0629 12:02:17.329270   39984 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220629115611-24356"
	I0629 12:02:17.387378   39984 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220629115611-24356"
	I0629 12:02:17.387386   39984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 12:02:17.329275   39984 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220629115611-24356"
	W0629 12:02:17.387414   39984 addons.go:162] addon metrics-server should already be in state true
	I0629 12:02:17.387427   39984 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220629115611-24356"
	I0629 12:02:17.329281   39984 addons.go:65] Setting dashboard=true in profile "embed-certs-20220629115611-24356"
	W0629 12:02:17.387455   39984 addons.go:162] addon storage-provisioner should already be in state true
	I0629 12:02:17.387480   39984 host.go:66] Checking if "embed-certs-20220629115611-24356" exists ...
	I0629 12:02:17.387480   39984 addons.go:153] Setting addon dashboard=true in "embed-certs-20220629115611-24356"
	I0629 12:02:17.387475   39984 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220629115611-24356"
	W0629 12:02:17.387500   39984 addons.go:162] addon dashboard should already be in state true
	I0629 12:02:17.387557   39984 host.go:66] Checking if "embed-certs-20220629115611-24356" exists ...
	I0629 12:02:17.387573   39984 host.go:66] Checking if "embed-certs-20220629115611-24356" exists ...
	I0629 12:02:17.388046   39984 cli_runner.go:164] Run: docker container inspect embed-certs-20220629115611-24356 --format={{.State.Status}}
	I0629 12:02:17.388231   39984 cli_runner.go:164] Run: docker container inspect embed-certs-20220629115611-24356 --format={{.State.Status}}
	I0629 12:02:17.389797   39984 cli_runner.go:164] Run: docker container inspect embed-certs-20220629115611-24356 --format={{.State.Status}}
	I0629 12:02:17.392847   39984 cli_runner.go:164] Run: docker container inspect embed-certs-20220629115611-24356 --format={{.State.Status}}
	I0629 12:02:17.402122   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0629 12:02:17.443969   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 12:02:17.544262   39984 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0629 12:02:17.545082   39984 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220629115611-24356"
	W0629 12:02:17.581288   39984 addons.go:162] addon default-storageclass should already be in state true
	I0629 12:02:17.618472   39984 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 12:02:17.640175   39984 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0629 12:02:17.640200   39984 host.go:66] Checking if "embed-certs-20220629115611-24356" exists ...
	I0629 12:02:17.661256   39984 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0629 12:02:17.682381   39984 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 12:02:17.724466   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0629 12:02:17.682806   39984 cli_runner.go:164] Run: docker container inspect embed-certs-20220629115611-24356 --format={{.State.Status}}
	I0629 12:02:17.724505   39984 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0629 12:02:17.724534   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0629 12:02:17.703153   39984 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0629 12:02:17.724565   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0629 12:02:17.724601   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 12:02:17.724690   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 12:02:17.724709   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 12:02:17.729992   39984 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220629115611-24356" to be "Ready" ...
	I0629 12:02:17.753250   39984 node_ready.go:49] node "embed-certs-20220629115611-24356" has status "Ready":"True"
	I0629 12:02:17.753270   39984 node_ready.go:38] duration metric: took 23.142427ms waiting for node "embed-certs-20220629115611-24356" to be "Ready" ...
	I0629 12:02:17.753280   39984 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 12:02:17.761227   39984 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-4bfwq" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:17.834816   39984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60811 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/embed-certs-20220629115611-24356/id_rsa Username:docker}
	I0629 12:02:17.835163   39984 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0629 12:02:17.835173   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0629 12:02:17.835233   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 12:02:17.835920   39984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60811 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/embed-certs-20220629115611-24356/id_rsa Username:docker}
	I0629 12:02:17.837624   39984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60811 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/embed-certs-20220629115611-24356/id_rsa Username:docker}
	I0629 12:02:17.918047   39984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60811 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/embed-certs-20220629115611-24356/id_rsa Username:docker}
	I0629 12:02:17.967072   39984 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0629 12:02:17.967089   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0629 12:02:17.976382   39984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 12:02:18.044346   39984 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0629 12:02:18.044362   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0629 12:02:18.082048   39984 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0629 12:02:18.082062   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0629 12:02:18.146371   39984 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0629 12:02:18.146455   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0629 12:02:18.179040   39984 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0629 12:02:18.179047   39984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0629 12:02:18.179057   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0629 12:02:18.246519   39984 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0629 12:02:18.246537   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0629 12:02:18.345640   39984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0629 12:02:18.353148   39984 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0629 12:02:18.353160   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0629 12:02:18.454046   39984 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0629 12:02:18.454069   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0629 12:02:18.472492   39984 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0629 12:02:18.472505   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0629 12:02:18.547601   39984 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0629 12:02:18.547613   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0629 12:02:18.578632   39984 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0629 12:02:18.578647   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0629 12:02:18.648142   39984 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0629 12:02:18.648163   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0629 12:02:18.681571   39984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0629 12:02:18.750483   39984 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.348276451s)
	I0629 12:02:18.750500   39984 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0629 12:02:18.888795   39984 addons.go:383] Verifying addon metrics-server=true in "embed-certs-20220629115611-24356"
	I0629 12:02:19.589385   39984 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0629 12:02:19.648351   39984 addons.go:414] enableAddons completed in 2.342474185s
	I0629 12:02:19.777473   39984 pod_ready.go:102] pod "coredns-6d4b75cb6d-4bfwq" in "kube-system" namespace has status "Ready":"False"
	I0629 12:02:21.779635   39984 pod_ready.go:102] pod "coredns-6d4b75cb6d-4bfwq" in "kube-system" namespace has status "Ready":"False"
	I0629 12:02:22.776738   39984 pod_ready.go:92] pod "coredns-6d4b75cb6d-4bfwq" in "kube-system" namespace has status "Ready":"True"
	I0629 12:02:22.776752   39984 pod_ready.go:81] duration metric: took 5.015355158s waiting for pod "coredns-6d4b75cb6d-4bfwq" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:22.776758   39984 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-689nj" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:22.781084   39984 pod_ready.go:92] pod "coredns-6d4b75cb6d-689nj" in "kube-system" namespace has status "Ready":"True"
	I0629 12:02:22.781092   39984 pod_ready.go:81] duration metric: took 4.329231ms waiting for pod "coredns-6d4b75cb6d-689nj" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:22.781098   39984 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:22.784942   39984 pod_ready.go:92] pod "etcd-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:02:22.784949   39984 pod_ready.go:81] duration metric: took 3.847521ms waiting for pod "etcd-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:22.784955   39984 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:22.788894   39984 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:02:22.788903   39984 pod_ready.go:81] duration metric: took 3.933089ms waiting for pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:22.788909   39984 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:22.792968   39984 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:02:22.792976   39984 pod_ready.go:81] duration metric: took 4.054757ms waiting for pod "kube-controller-manager-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:22.792982   39984 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9whjc" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:23.174612   39984 pod_ready.go:92] pod "kube-proxy-9whjc" in "kube-system" namespace has status "Ready":"True"
	I0629 12:02:23.174622   39984 pod_ready.go:81] duration metric: took 381.624505ms waiting for pod "kube-proxy-9whjc" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:23.174628   39984 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:23.574939   39984 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:02:23.574948   39984 pod_ready.go:81] duration metric: took 400.303754ms waiting for pod "kube-scheduler-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:23.574954   39984 pod_ready.go:38] duration metric: took 5.821490673s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 12:02:23.574966   39984 api_server.go:51] waiting for apiserver process to appear ...
	I0629 12:02:23.575014   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:02:23.584594   39984 api_server.go:71] duration metric: took 6.278645942s to wait for apiserver process to appear ...
	I0629 12:02:23.584605   39984 api_server.go:87] waiting for apiserver healthz status ...
	I0629 12:02:23.584614   39984 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60815/healthz ...
	I0629 12:02:23.589756   39984 api_server.go:266] https://127.0.0.1:60815/healthz returned 200:
	ok
	I0629 12:02:23.590804   39984 api_server.go:140] control plane version: v1.24.2
	I0629 12:02:23.590813   39984 api_server.go:130] duration metric: took 6.203753ms to wait for apiserver health ...
	I0629 12:02:23.590818   39984 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 12:02:23.777474   39984 system_pods.go:59] 9 kube-system pods found
	I0629 12:02:23.777488   39984 system_pods.go:61] "coredns-6d4b75cb6d-4bfwq" [9ea6d67d-f471-4bb3-9201-579f2d373e85] Running
	I0629 12:02:23.777492   39984 system_pods.go:61] "coredns-6d4b75cb6d-689nj" [23db562d-ab6b-4c56-8d94-31aea6542072] Running
	I0629 12:02:23.777495   39984 system_pods.go:61] "etcd-embed-certs-20220629115611-24356" [54618f39-914f-4ec2-9df9-a250f11c9a2c] Running
	I0629 12:02:23.777512   39984 system_pods.go:61] "kube-apiserver-embed-certs-20220629115611-24356" [3907cf9f-b479-4990-a2bb-00926370ca98] Running
	I0629 12:02:23.777519   39984 system_pods.go:61] "kube-controller-manager-embed-certs-20220629115611-24356" [5fc7c4d6-5c8c-40b7-a170-12edad850417] Running
	I0629 12:02:23.777524   39984 system_pods.go:61] "kube-proxy-9whjc" [a127008e-42de-4155-a698-e83602edb663] Running
	I0629 12:02:23.777527   39984 system_pods.go:61] "kube-scheduler-embed-certs-20220629115611-24356" [35f4ef5a-3772-4f5e-836b-8feaebdadb30] Running
	I0629 12:02:23.777532   39984 system_pods.go:61] "metrics-server-5c6f97fb75-plpnv" [af632ef8-e7ac-46ee-b7a0-3552276f17e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 12:02:23.777536   39984 system_pods.go:61] "storage-provisioner" [4c55837a-95e7-48e8-a535-c3dcd1a36389] Running
	I0629 12:02:23.777540   39984 system_pods.go:74] duration metric: took 186.713608ms to wait for pod list to return data ...
	I0629 12:02:23.777545   39984 default_sa.go:34] waiting for default service account to be created ...
	I0629 12:02:23.975562   39984 default_sa.go:45] found service account: "default"
	I0629 12:02:23.975577   39984 default_sa.go:55] duration metric: took 198.02077ms for default service account to be created ...
	I0629 12:02:23.975583   39984 system_pods.go:116] waiting for k8s-apps to be running ...
	I0629 12:02:24.177955   39984 system_pods.go:86] 8 kube-system pods found
	I0629 12:02:24.177971   39984 system_pods.go:89] "coredns-6d4b75cb6d-4bfwq" [9ea6d67d-f471-4bb3-9201-579f2d373e85] Running
	I0629 12:02:24.177976   39984 system_pods.go:89] "etcd-embed-certs-20220629115611-24356" [54618f39-914f-4ec2-9df9-a250f11c9a2c] Running
	I0629 12:02:24.177995   39984 system_pods.go:89] "kube-apiserver-embed-certs-20220629115611-24356" [3907cf9f-b479-4990-a2bb-00926370ca98] Running
	I0629 12:02:24.178003   39984 system_pods.go:89] "kube-controller-manager-embed-certs-20220629115611-24356" [5fc7c4d6-5c8c-40b7-a170-12edad850417] Running
	I0629 12:02:24.178008   39984 system_pods.go:89] "kube-proxy-9whjc" [a127008e-42de-4155-a698-e83602edb663] Running
	I0629 12:02:24.178012   39984 system_pods.go:89] "kube-scheduler-embed-certs-20220629115611-24356" [35f4ef5a-3772-4f5e-836b-8feaebdadb30] Running
	I0629 12:02:24.178021   39984 system_pods.go:89] "metrics-server-5c6f97fb75-plpnv" [af632ef8-e7ac-46ee-b7a0-3552276f17e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 12:02:24.178025   39984 system_pods.go:89] "storage-provisioner" [4c55837a-95e7-48e8-a535-c3dcd1a36389] Running
	I0629 12:02:24.178034   39984 system_pods.go:126] duration metric: took 202.438161ms to wait for k8s-apps to be running ...
	I0629 12:02:24.178039   39984 system_svc.go:44] waiting for kubelet service to be running ....
	I0629 12:02:24.178092   39984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 12:02:24.187986   39984 system_svc.go:56] duration metric: took 9.942208ms WaitForService to wait for kubelet.
	I0629 12:02:24.187999   39984 kubeadm.go:572] duration metric: took 6.882034131s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0629 12:02:24.188014   39984 node_conditions.go:102] verifying NodePressure condition ...
	I0629 12:02:24.373750   39984 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0629 12:02:24.373764   39984 node_conditions.go:123] node cpu capacity is 6
	I0629 12:02:24.373770   39984 node_conditions.go:105] duration metric: took 185.747482ms to run NodePressure ...
	I0629 12:02:24.373781   39984 start.go:213] waiting for startup goroutines ...
	I0629 12:02:24.406628   39984 start.go:506] kubectl: 1.24.0, cluster: 1.24.2 (minor skew: 0)
	I0629 12:02:24.428518   39984 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220629115611-24356" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-29 18:57:27 UTC, end at Wed 2022-06-29 19:03:24 UTC. --
	Jun 29 19:01:54 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:01:54.344653562Z" level=info msg="ignoring event" container=91369401a41a69db9139878aca0c84b5afbb6b656949c91c21f7be7ef769704e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:01:54 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:01:54.413723556Z" level=info msg="ignoring event" container=74b9c8039c68de94aef80fac9295fec375579b112170e84c20b63dfc45899bd6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:01:54 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:01:54.529378487Z" level=info msg="ignoring event" container=96288adac2971581c6d2c4c033dc09ff170abc16d0d0dda520365f3297c04022 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:01:54 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:01:54.598650013Z" level=info msg="ignoring event" container=e22b58712661000c3e2d01940b52aeaaeb635743bfc6b48086ae504fb5e3022f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:01:54 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:01:54.686258850Z" level=info msg="ignoring event" container=95c1df9a1a6ba6a12b9cb98ec2fa3176fe8ea24f7e09c64653ee7ad6e55283ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:01:54 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:01:54.755146311Z" level=info msg="ignoring event" container=88a08097920d3e73aed93851f84963df312ee582664b477d8166eb3c17d3f96d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:01:54 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:01:54.880985456Z" level=info msg="ignoring event" container=b9125da07096037dcfb4169fefd294375762310eaf048649823e62539d960fde module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:01:54 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:01:54.943055336Z" level=info msg="ignoring event" container=8f01f79963fb4dc2ae009f24b771d96c97d216851a0fd85da9edfc69202daf1c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:01:54 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:01:54.997050114Z" level=info msg="ignoring event" container=3193b00d499be3b9a792ecd0cd7f6d32d625701f749046b1f97ace72db3188d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:01:55 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:01:55.062688681Z" level=info msg="ignoring event" container=d60146d2972f6dd6062eadda047abbb34545b63d33a5251582617cc87c9cd836 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:01:55 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:01:55.147198478Z" level=info msg="ignoring event" container=853ad8f1abb5efa793c0ef8da991c8b11c9516bad27d847f866a6caeb012a8ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:02:19 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:19.839611088Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:02:19 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:19.839700758Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:02:19 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:19.840824717Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:02:22 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:22.204960248Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 29 19:02:22 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:22.896348617Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 29 19:02:23 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:23.197208235Z" level=info msg="ignoring event" container=d4907e1b916f279140de583694aa93b58711450056410b641d5b42dd6bdaf036 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:02:23 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:23.244818769Z" level=info msg="ignoring event" container=73b9e1fe7949a8bcc52a86b2267f357847ac00074ffdebff6f1053f516a9ac99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:02:28 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:28.371373191Z" level=info msg="ignoring event" container=d947c4d626d5b838dbe1032f278e1ff0bafc3033f80442457b8e29730d378c9e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:02:28 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:28.413306022Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jun 29 19:02:29 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:29.268936321Z" level=info msg="ignoring event" container=27d19a135b91affbcc9966b5c5da10f66bc519d33861c1c99916f518bc04d89b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:02:33 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:33.358543495Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:02:33 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:33.358648626Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:02:33 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:33.454775180Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:02:47 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:47.148580207Z" level=info msg="ignoring event" container=de7b046723782bfee336a6ac80f1646f3a101a4e6dccc317099232c6073a425c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	de7b046723782       a90209bb39e3d                                                                                    38 seconds ago       Exited              dashboard-metrics-scraper   2                   72448c285b667
	bb35f1a1cbfe6       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   51 seconds ago       Running             kubernetes-dashboard        0                   07593a17b1782
	102e7e31fe20b       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   a5521e8c4785d
	e78d061ffbf60       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   9af798338e9fe
	e456f8380e066       a634548d10b03                                                                                    About a minute ago   Running             kube-proxy                  0                   3e8d13bde5534
	19002a7796106       5d725196c1f47                                                                                    About a minute ago   Running             kube-scheduler              0                   fd08f90e169f7
	0fc0a18250b47       d3377ffb7177c                                                                                    About a minute ago   Running             kube-apiserver              0                   e221f2a8d00a4
	ff2d33804dec1       34cdf99b1bb3b                                                                                    About a minute ago   Running             kube-controller-manager     0                   55361e8c8398c
	11df91bcbbca9       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   b9862ec2f987f
	
	* 
	* ==> coredns [e78d061ffbf6] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220629115611-24356
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220629115611-24356
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=80ef72c6e06144133907f90b1b2924df52b551ed
	                    minikube.k8s.io/name=embed-certs-20220629115611-24356
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_29T12_02_03_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Jun 2022 19:02:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220629115611-24356
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Jun 2022 19:03:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Jun 2022 19:03:22 +0000   Wed, 29 Jun 2022 19:01:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Jun 2022 19:03:22 +0000   Wed, 29 Jun 2022 19:01:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Jun 2022 19:03:22 +0000   Wed, 29 Jun 2022 19:01:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Jun 2022 19:03:22 +0000   Wed, 29 Jun 2022 19:02:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    embed-certs-20220629115611-24356
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbe1e1cef6e940328962dca52b3c5731
	  System UUID:                762c4854-29ab-4ef1-b3c6-183c64d29e4d
	  Boot ID:                    fadc233d-8cf8-4f28-b4a1-fb218440cdcd
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-4bfwq                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     68s
	  kube-system                 etcd-embed-certs-20220629115611-24356                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         83s
	  kube-system                 kube-apiserver-embed-certs-20220629115611-24356             250m (4%)     0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-embed-certs-20220629115611-24356    200m (3%)     0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-9whjc                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-scheduler-embed-certs-20220629115611-24356             100m (1%)     0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 metrics-server-5c6f97fb75-plpnv                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         67s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-5tqfn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-9qp4w                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 67s   kube-proxy       
	  Normal  Starting                 82s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  82s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  82s   kubelet          Node embed-certs-20220629115611-24356 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s   kubelet          Node embed-certs-20220629115611-24356 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s   kubelet          Node embed-certs-20220629115611-24356 status is now: NodeHasSufficientPID
	  Normal  NodeReady                82s   kubelet          Node embed-certs-20220629115611-24356 status is now: NodeReady
	  Normal  RegisteredNode           69s   node-controller  Node embed-certs-20220629115611-24356 event: Registered Node embed-certs-20220629115611-24356 in Controller
	  Normal  Starting                 3s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3s    kubelet          Node embed-certs-20220629115611-24356 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet          Node embed-certs-20220629115611-24356 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet          Node embed-certs-20220629115611-24356 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3s    kubelet          Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [11df91bcbbca] <==
	* {"level":"info","ts":"2022-06-29T19:01:57.920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-06-29T19:01:57.920Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-06-29T19:01:57.921Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-29T19:01:57.921Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-29T19:01:57.921Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-29T19:01:57.921Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-06-29T19:01:57.921Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-06-29T19:01:58.174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-29T19:01:58.174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-29T19:01:58.174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-06-29T19:01:58.174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-06-29T19:01:58.174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-06-29T19:01:58.174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-06-29T19:01:58.174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-06-29T19:01:58.174Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:embed-certs-20220629115611-24356 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-29T19:01:58.174Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T19:01:58.174Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T19:01:58.175Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-29T19:01:58.176Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-29T19:01:58.176Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-29T19:01:58.176Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-06-29T19:01:58.184Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:01:58.186Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:01:58.186Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:01:58.186Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  19:03:25 up  1:11,  0 users,  load average: 0.73, 1.22, 1.35
	Linux embed-certs-20220629115611-24356 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [0fc0a18250b4] <==
	* I0629 19:02:02.778315       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0629 19:02:03.354996       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0629 19:02:03.360642       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0629 19:02:03.367973       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0629 19:02:03.446585       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0629 19:02:16.725027       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0629 19:02:16.776119       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0629 19:02:17.498436       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0629 19:02:18.895557       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.111.153.97]
	I0629 19:02:19.509467       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.100.32.89]
	I0629 19:02:19.573604       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.111.33.19]
	W0629 19:02:19.860636       1 handler_proxy.go:102] no RequestInfo found in the context
	E0629 19:02:19.860675       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0629 19:02:19.860688       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0629 19:02:19.860759       1 handler_proxy.go:102] no RequestInfo found in the context
	E0629 19:02:19.860892       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0629 19:02:19.862353       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0629 19:03:21.823512       1 handler_proxy.go:102] no RequestInfo found in the context
	E0629 19:03:21.823549       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0629 19:03:21.823556       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0629 19:03:21.834549       1 handler_proxy.go:102] no RequestInfo found in the context
	E0629 19:03:21.834592       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0629 19:03:21.834599       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [ff2d33804dec] <==
	* I0629 19:02:17.048254       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-689nj"
	I0629 19:02:18.790767       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0629 19:02:18.794513       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0629 19:02:18.798395       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0629 19:02:18.803603       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-plpnv"
	I0629 19:02:19.404662       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0629 19:02:19.408057       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 19:02:19.410218       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	E0629 19:02:19.411974       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 19:02:19.413307       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 19:02:19.416046       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 19:02:19.416422       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0629 19:02:19.418334       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0629 19:02:19.425479       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 19:02:19.425680       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 19:02:19.426776       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 19:02:19.426860       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 19:02:19.428239       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 19:02:19.428284       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 19:02:19.463518       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-5tqfn"
	I0629 19:02:19.463551       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-9qp4w"
	E0629 19:02:46.220184       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0629 19:02:46.634461       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0629 19:03:22.070098       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0629 19:03:22.121105       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [e456f8380e06] <==
	* I0629 19:02:17.409283       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0629 19:02:17.409366       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0629 19:02:17.409453       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0629 19:02:17.489344       1 server_others.go:206] "Using iptables Proxier"
	I0629 19:02:17.489451       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0629 19:02:17.489464       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0629 19:02:17.489480       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0629 19:02:17.489505       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 19:02:17.489620       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 19:02:17.489931       1 server.go:661] "Version info" version="v1.24.2"
	I0629 19:02:17.489973       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 19:02:17.490623       1 config.go:317] "Starting service config controller"
	I0629 19:02:17.490665       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0629 19:02:17.490721       1 config.go:226] "Starting endpoint slice config controller"
	I0629 19:02:17.490786       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0629 19:02:17.491458       1 config.go:444] "Starting node config controller"
	I0629 19:02:17.491467       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0629 19:02:17.590807       1 shared_informer.go:262] Caches are synced for service config
	I0629 19:02:17.590917       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0629 19:02:17.592071       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [19002a779610] <==
	* W0629 19:02:00.688563       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0629 19:02:00.688598       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0629 19:02:00.688683       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0629 19:02:00.688716       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0629 19:02:00.688725       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0629 19:02:00.688735       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0629 19:02:00.688992       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0629 19:02:00.689026       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0629 19:02:00.689180       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0629 19:02:00.689208       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0629 19:02:00.689772       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0629 19:02:00.689890       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0629 19:02:00.689910       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0629 19:02:00.689922       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0629 19:02:00.690025       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0629 19:02:00.690587       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0629 19:02:00.690273       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0629 19:02:00.690825       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0629 19:02:00.690316       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0629 19:02:00.690879       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0629 19:02:01.757107       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0629 19:02:01.757143       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0629 19:02:01.758805       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0629 19:02:01.758869       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0629 19:02:01.986939       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-29 18:57:27 UTC, end at Wed 2022-06-29 19:03:26 UTC. --
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.529282    9859 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjbhk\" (UniqueName: \"kubernetes.io/projected/719c4863-f095-450d-bdbf-445aa7750857-kube-api-access-vjbhk\") pod \"dashboard-metrics-scraper-dffd48c4c-5tqfn\" (UID: \"719c4863-f095-450d-bdbf-445aa7750857\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-5tqfn"
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.529320    9859 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4c55837a-95e7-48e8-a535-c3dcd1a36389-tmp\") pod \"storage-provisioner\" (UID: \"4c55837a-95e7-48e8-a535-c3dcd1a36389\") " pod="kube-system/storage-provisioner"
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.529657    9859 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfhwr\" (UniqueName: \"kubernetes.io/projected/4c55837a-95e7-48e8-a535-c3dcd1a36389-kube-api-access-nfhwr\") pod \"storage-provisioner\" (UID: \"4c55837a-95e7-48e8-a535-c3dcd1a36389\") " pod="kube-system/storage-provisioner"
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.529683    9859 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a127008e-42de-4155-a698-e83602edb663-xtables-lock\") pod \"kube-proxy-9whjc\" (UID: \"a127008e-42de-4155-a698-e83602edb663\") " pod="kube-system/kube-proxy-9whjc"
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.529700    9859 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96kmx\" (UniqueName: \"kubernetes.io/projected/af632ef8-e7ac-46ee-b7a0-3552276f17e9-kube-api-access-96kmx\") pod \"metrics-server-5c6f97fb75-plpnv\" (UID: \"af632ef8-e7ac-46ee-b7a0-3552276f17e9\") " pod="kube-system/metrics-server-5c6f97fb75-plpnv"
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.529918    9859 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqmd6\" (UniqueName: \"kubernetes.io/projected/9ea6d67d-f471-4bb3-9201-579f2d373e85-kube-api-access-cqmd6\") pod \"coredns-6d4b75cb6d-4bfwq\" (UID: \"9ea6d67d-f471-4bb3-9201-579f2d373e85\") " pod="kube-system/coredns-6d4b75cb6d-4bfwq"
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.529940    9859 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2e8b31a8-de1f-45db-90b7-8d4b00453b5b-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-9qp4w\" (UID: \"2e8b31a8-de1f-45db-90b7-8d4b00453b5b\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-9qp4w"
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.530134    9859 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a127008e-42de-4155-a698-e83602edb663-kube-proxy\") pod \"kube-proxy-9whjc\" (UID: \"a127008e-42de-4155-a698-e83602edb663\") " pod="kube-system/kube-proxy-9whjc"
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.530153    9859 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a127008e-42de-4155-a698-e83602edb663-lib-modules\") pod \"kube-proxy-9whjc\" (UID: \"a127008e-42de-4155-a698-e83602edb663\") " pod="kube-system/kube-proxy-9whjc"
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.530168    9859 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct4c7\" (UniqueName: \"kubernetes.io/projected/a127008e-42de-4155-a698-e83602edb663-kube-api-access-ct4c7\") pod \"kube-proxy-9whjc\" (UID: \"a127008e-42de-4155-a698-e83602edb663\") " pod="kube-system/kube-proxy-9whjc"
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.530184    9859 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/af632ef8-e7ac-46ee-b7a0-3552276f17e9-tmp-dir\") pod \"metrics-server-5c6f97fb75-plpnv\" (UID: \"af632ef8-e7ac-46ee-b7a0-3552276f17e9\") " pod="kube-system/metrics-server-5c6f97fb75-plpnv"
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.530198    9859 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/719c4863-f095-450d-bdbf-445aa7750857-tmp-volume\") pod \"dashboard-metrics-scraper-dffd48c4c-5tqfn\" (UID: \"719c4863-f095-450d-bdbf-445aa7750857\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-5tqfn"
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.530474    9859 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jwww\" (UniqueName: \"kubernetes.io/projected/2e8b31a8-de1f-45db-90b7-8d4b00453b5b-kube-api-access-5jwww\") pod \"kubernetes-dashboard-5fd5574d9f-9qp4w\" (UID: \"2e8b31a8-de1f-45db-90b7-8d4b00453b5b\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-9qp4w"
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.530533    9859 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ea6d67d-f471-4bb3-9201-579f2d373e85-config-volume\") pod \"coredns-6d4b75cb6d-4bfwq\" (UID: \"9ea6d67d-f471-4bb3-9201-579f2d373e85\") " pod="kube-system/coredns-6d4b75cb6d-4bfwq"
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.530547    9859 reconciler.go:157] "Reconciler: start to sync state"
	Jun 29 19:03:24 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:24.694943    9859 request.go:601] Waited for 1.153673276s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jun 29 19:03:24 embed-certs-20220629115611-24356 kubelet[9859]: E0629 19:03:24.699782    9859 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-embed-certs-20220629115611-24356\" already exists" pod="kube-system/kube-scheduler-embed-certs-20220629115611-24356"
	Jun 29 19:03:24 embed-certs-20220629115611-24356 kubelet[9859]: E0629 19:03:24.878783    9859 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-embed-certs-20220629115611-24356\" already exists" pod="kube-system/etcd-embed-certs-20220629115611-24356"
	Jun 29 19:03:25 embed-certs-20220629115611-24356 kubelet[9859]: E0629 19:03:25.084614    9859 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-embed-certs-20220629115611-24356\" already exists" pod="kube-system/kube-controller-manager-embed-certs-20220629115611-24356"
	Jun 29 19:03:25 embed-certs-20220629115611-24356 kubelet[9859]: E0629 19:03:25.355398    9859 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-embed-certs-20220629115611-24356\" already exists" pod="kube-system/kube-apiserver-embed-certs-20220629115611-24356"
	Jun 29 19:03:25 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:25.877911    9859 scope.go:110] "RemoveContainer" containerID="de7b046723782bfee336a6ac80f1646f3a101a4e6dccc317099232c6073a425c"
	Jun 29 19:03:26 embed-certs-20220629115611-24356 kubelet[9859]: E0629 19:03:26.310530    9859 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 29 19:03:26 embed-certs-20220629115611-24356 kubelet[9859]: E0629 19:03:26.310588    9859 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 29 19:03:26 embed-certs-20220629115611-24356 kubelet[9859]: E0629 19:03:26.310706    9859 kuberuntime_manager.go:905] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-96kmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeH
andler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices
:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c6f97fb75-plpnv_kube-system(af632ef8-e7ac-46ee-b7a0-3552276f17e9): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jun 29 19:03:26 embed-certs-20220629115611-24356 kubelet[9859]: E0629 19:03:26.310756    9859 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c6f97fb75-plpnv" podUID=af632ef8-e7ac-46ee-b7a0-3552276f17e9
	
	* 
	* ==> kubernetes-dashboard [bb35f1a1cbfe] <==
	* 2022/06/29 19:02:33 Using namespace: kubernetes-dashboard
	2022/06/29 19:02:33 Using in-cluster config to connect to apiserver
	2022/06/29 19:02:33 Using secret token for csrf signing
	2022/06/29 19:02:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/29 19:02:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/29 19:02:33 Successful initial request to the apiserver, version: v1.24.2
	2022/06/29 19:02:33 Generating JWE encryption key
	2022/06/29 19:02:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/29 19:02:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/29 19:02:33 Initializing JWE encryption key from synchronized object
	2022/06/29 19:02:33 Creating in-cluster Sidecar client
	2022/06/29 19:02:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/29 19:02:33 Serving insecurely on HTTP port: 9090
2022/06/29 19:02:33 Starting overwatch
2022/06/29 19:03:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [102e7e31fe20] <==
	* I0629 19:02:19.797530       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0629 19:02:19.807242       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0629 19:02:19.807312       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0629 19:02:19.814340       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0629 19:02:19.814480       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20220629115611-24356_7a156cbb-c819-42e7-8200-404bba168a92!
	I0629 19:02:19.814773       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b1f2f454-8c35-4f18-b5aa-3ee51954718a", APIVersion:"v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20220629115611-24356_7a156cbb-c819-42e7-8200-404bba168a92 became leader
	I0629 19:02:19.914865       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20220629115611-24356_7a156cbb-c819-42e7-8200-404bba168a92!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220629115611-24356 -n embed-certs-20220629115611-24356
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220629115611-24356 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-plpnv
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220629115611-24356 describe pod metrics-server-5c6f97fb75-plpnv
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220629115611-24356 describe pod metrics-server-5c6f97fb75-plpnv: exit status 1 (272.420428ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-plpnv" not found

** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220629115611-24356 describe pod metrics-server-5c6f97fb75-plpnv: exit status 1
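The post-mortem above finds non-running pods by filtering on `status.phase!=Running`. A minimal Python sketch of that same filter, applied client-side to an illustrative pod list mirroring this run's output (the harness itself shells out to `kubectl` with a field selector; this is not its implementation):

```python
# Illustrative pod objects shaped like the Kubernetes PodList items returned
# by `kubectl get po -o json`; only the fields the filter needs are included.
pods = [
    {"metadata": {"name": "storage-provisioner"}, "status": {"phase": "Running"}},
    {"metadata": {"name": "coredns-6d4b75cb6d-4bfwq"}, "status": {"phase": "Running"}},
    # metrics-server never started: its image pull against fake.domain fails.
    {"metadata": {"name": "metrics-server-5c6f97fb75-plpnv"}, "status": {"phase": "Pending"}},
]

# Equivalent of --field-selector=status.phase!=Running, evaluated client-side.
non_running = [p["metadata"]["name"] for p in pods if p["status"]["phase"] != "Running"]
print(non_running)
```

Note that by the time the follow-up `kubectl describe pod` ran, the pod had already been garbage-collected, hence the `NotFound` error in stderr above.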
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220629115611-24356
helpers_test.go:235: (dbg) docker inspect embed-certs-20220629115611-24356:

-- stdout --
	[
	    {
	        "Id": "3865641b000a94b244654e77d9ca8816e1b071bda8a922bdc344b38142578e83",
	        "Created": "2022-06-29T18:56:18.782942519Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 267710,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T18:57:27.28926741Z",
	            "FinishedAt": "2022-06-29T18:57:25.331741028Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/3865641b000a94b244654e77d9ca8816e1b071bda8a922bdc344b38142578e83/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3865641b000a94b244654e77d9ca8816e1b071bda8a922bdc344b38142578e83/hostname",
	        "HostsPath": "/var/lib/docker/containers/3865641b000a94b244654e77d9ca8816e1b071bda8a922bdc344b38142578e83/hosts",
	        "LogPath": "/var/lib/docker/containers/3865641b000a94b244654e77d9ca8816e1b071bda8a922bdc344b38142578e83/3865641b000a94b244654e77d9ca8816e1b071bda8a922bdc344b38142578e83-json.log",
	        "Name": "/embed-certs-20220629115611-24356",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220629115611-24356:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220629115611-24356",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/47cad10a02fffea2aa9b72eaa908bbbb3e99dbb8d86b78bc4a28d35041dce0e6-init/diff:/var/lib/docker/overlay2/fffebe0fdfada5807aeb835ff23043496ab70477725ee4f168b630301ac03e45/diff:/var/lib/docker/overlay2/d4eb6d2f34aa8e5c143d900dccdec5da9e3d130567442e6745d4efac5202fe49/diff:/var/lib/docker/overlay2/eb35fadba12ed9c48500d69b77e98e7dd72e90d3de5197d58b370df5b5dca4c7/diff:/var/lib/docker/overlay2/7b63894f671ef1edaa7c3b80a2acbde52dcdb21970e320799b6884e79553ea3e/diff:/var/lib/docker/overlay2/3740b6bc6ff226137eb09a6350d4395dc04bd9012c6c66125dc2ea6b663082cd/diff:/var/lib/docker/overlay2/a2fda66ed4937725e85838baed61cac418abe2ba55b4e664bf944246efcdd371/diff:/var/lib/docker/overlay2/574408913c5c73ee699b85768bbb4c0ce70e697bf6eb623e32017c62e8413acd/diff:/var/lib/docker/overlay2/1cde03c3877bfb18ad0533f814863e3030abec268ff30faceab8815ea7e2daf2/diff:/var/lib/docker/overlay2/52bf889e64b2ea0160f303622d5febb9c52b864e5a6dc2bfa5db90933ccaaa29/diff:/var/lib/docker/overlay2/b131e6
ae4a7a7f5705d087e4001676276e4daa26d6acfc99799bb4992e322410/diff:/var/lib/docker/overlay2/3f5c774f6f46936a974bfc6530b012fda75a59b22450e3342486fe400ab4b531/diff:/var/lib/docker/overlay2/8462528084f0c44a79e421427e0e4bc9ddd7642428c47ff1899d41b265223245/diff:/var/lib/docker/overlay2/cb9765866d13ba37669ec242ea0a1af87c92c7291c716e52037a2ccadc64ac82/diff:/var/lib/docker/overlay2/f0d06e6fa53f3ca9622f1efcfac6fe3fd18d2e5b9e07be3d624b0b9987073e55/diff:/var/lib/docker/overlay2/4ebd12d8b25cff2d3d8a989c047b696088121f0964cc7f94c6d0178ef16e3e1f/diff:/var/lib/docker/overlay2/40e16f5720fd3a8c1c8792aea0ec143af819f19cad845dde40b57ed7e372ab73/diff:/var/lib/docker/overlay2/3ce5ee64ba683c997a13b7ffa65978b4c9652772729737facd794209d49251c3/diff:/var/lib/docker/overlay2/c55c549a78d490ea576942661ba65103ea2992693548217973bb8fa1a5948b74/diff:/var/lib/docker/overlay2/4651b16dbc2e22b8a43dc1154546514f2076168d12f9c108f85fe7c6e60325f0/diff:/var/lib/docker/overlay2/9576343ea03501b15b520a83ffdc675c6d9ecd501f6ffcf6564dd75aa4f2812a/diff:/var/lib/d
ocker/overlay2/635ba7d01f96fd1ec1acabf157f4e5c00cbf80adf65b7f8873e444745fef2c9b/diff:/var/lib/docker/overlay2/6bbe0ce6ca00a7eb5bd7c22def5fcab4ebecab4a0b4cbc5ed236429671a41b6c/diff:/var/lib/docker/overlay2/b335551ba0fcfd6bff6ef5627289041f3083dc338e67b4f4728d4937bb6fb33a/diff:/var/lib/docker/overlay2/58cd90f6ad9016f3c4befb63eac504c9d2f0fc66251c5c9e3348080785d3cec4/diff:/var/lib/docker/overlay2/b7d943a8463e032d405d531846436b89574f10efeea6e4f2df92e3bb0e169d8e/diff:/var/lib/docker/overlay2/e633899f71c18e322af1b75837392bc89fd4275534b5bc70037965b0b80a770d/diff:/var/lib/docker/overlay2/651aabda39b5851bd186e23bc84f1029d819ed8eb032b13ac12f50f3d1486bfb/diff:/var/lib/docker/overlay2/3b137e27694d242a419b3fd2f8605837edfe77dae9462c63c3d7b41538e82591/diff:/var/lib/docker/overlay2/e9d4369b871c47acb146b73f8cbe14b89b0f74027df9117a7dc73f5dee8fee1c/diff:/var/lib/docker/overlay2/9379269362a969b07cc7d7f9faff9fa3b745529df38758733014a5dbe2470775/diff:/var/lib/docker/overlay2/9231c154723fa536d9894f703ec0388448e8611d5a01d54bca3a5b0a0b1
7ffd2/diff:/var/lib/docker/overlay2/9610e37ded5c6da7bd2c8edc56c3ae864637bb354f8ea3d6d1ccee6bd5c2aa7f/diff:/var/lib/docker/overlay2/025ecca5e756b1b8177204df7b2f2567a76dda456b2f1a8e312efd63150a8943/diff:/var/lib/docker/overlay2/7e69089e438e096c36ea0a4a37280fd036841e3287e57635e3407eb58fc0b6da/diff:/var/lib/docker/overlay2/c6d9ef67ed33e64c8ac8c4cdc7c33eb68f5266987969676165cabc2cf2fd346b/diff:/var/lib/docker/overlay2/394627c68237f7993b91eb0c377001630bb2e709dd58f65d899d44a3586dae91/diff:/var/lib/docker/overlay2/0c0c3c94789fc85cd70d9ee2b56d67ce6471d4dced47f21f15152d4edb6bc3e5/diff:/var/lib/docker/overlay2/849809e48c9bcbfe092aa063fcd274f284eeacde89acbb602b439d4cf0aef9b6/diff:/var/lib/docker/overlay2/49c27f0a55f204b161aa2da33ba8004f46cb93bf673975ad1b6286ce659db632/diff:/var/lib/docker/overlay2/a712a8f5cdb2f3840c706296240407405826d2936df034393c1ddf3cf2480b5f/diff:/var/lib/docker/overlay2/47949bfd134ff7a50def5e9b3af3424faf216354d1f157552f3c63c67c2728ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/47cad10a02fffea2aa9b72eaa908bbbb3e99dbb8d86b78bc4a28d35041dce0e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/47cad10a02fffea2aa9b72eaa908bbbb3e99dbb8d86b78bc4a28d35041dce0e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/47cad10a02fffea2aa9b72eaa908bbbb3e99dbb8d86b78bc4a28d35041dce0e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220629115611-24356",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220629115611-24356/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220629115611-24356",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220629115611-24356",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220629115611-24356",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3532b4634d6cee6d2a3c955d4512246775f2b9b5ecf455de20e03773d8343824",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60811"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60812"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60813"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60814"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60815"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3532b4634d6c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220629115611-24356": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3865641b000a",
	                        "embed-certs-20220629115611-24356"
	                    ],
	                    "NetworkID": "789a63df411698d553bc4fa5ef7a823a36c8c59abd40f53ac9c3c90a49d15914",
	                    "EndpointID": "7c491b8aa76b8c8f205c04c8e63195f0cc7c6f91ded822eb6800f496aad24108",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
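The `docker inspect` dump above carries the port mappings minikube relies on; for example, the API server's 8443/tcp is published on host port 60815. A small sketch of extracting that mapping from the inspect JSON (the sample here is trimmed to the relevant fields from this run; the test harness does not do this in Python):

```python
import json

# Trimmed `docker inspect` output, keeping only the NetworkSettings.Ports
# entry for the API-server port as shown in the report above.
inspect_json = """
[{"NetworkSettings": {"Ports": {"8443/tcp": [{"HostIp": "0.0.0.0", "HostPort": "60815"}]}}}]
"""

data = json.loads(inspect_json)
# `docker inspect` always returns a JSON array, one element per container.
host_port = data[0]["NetworkSettings"]["Ports"]["8443/tcp"][0]["HostPort"]
print(host_port)
```

The same lookup can be done directly with a Go template, e.g. `docker inspect --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' <container>`.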
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220629115611-24356 -n embed-certs-20220629115611-24356
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-20220629115611-24356 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p embed-certs-20220629115611-24356 logs -n 25: (2.690605307s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:47 PDT | 29 Jun 22 11:47 PDT |
	|         | kubenet-20220629112950-24356                      |          |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:47 PDT |                     |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |          |         |         |                     |                     |
	|         | --disable-driver-mounts                           |          |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |          |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:48 PDT | 29 Jun 22 11:48 PDT |
	|         | kubenet-20220629112950-24356                      |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:48 PDT | 29 Jun 22 11:49 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --preload=false                       |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 29 Jun 22 11:49 PDT | 29 Jun 22 11:49 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:49 PDT | 29 Jun 22 11:49 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 29 Jun 22 11:49 PDT | 29 Jun 22 11:49 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:49 PDT | 29 Jun 22 11:54 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --preload=false                       |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 29 Jun 22 11:51 PDT |                     |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:52 PDT | 29 Jun 22 11:53 PDT |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 29 Jun 22 11:53 PDT | 29 Jun 22 11:53 PDT |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:53 PDT |                     |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |          |         |         |                     |                     |
	|         | --disable-driver-mounts                           |          |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |          |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:55 PDT | 29 Jun 22 11:55 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | sudo crictl images -o json                        |          |         |         |                     |                     |
	| pause   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:55 PDT | 29 Jun 22 11:55 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	| unpause | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:55 PDT | 29 Jun 22 11:55 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:56 PDT | 29 Jun 22 11:56 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:56 PDT | 29 Jun 22 11:56 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:56 PDT | 29 Jun 22 11:56 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT | 29 Jun 22 11:57 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT | 29 Jun 22 11:57 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT | 29 Jun 22 11:57 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT | 29 Jun 22 12:02 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:02 PDT | 29 Jun 22 12:02 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | sudo crictl images -o json                        |          |         |         |                     |                     |
	| pause   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:02 PDT | 29 Jun 22 12:02 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	| unpause | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/29 11:57:26
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 11:57:26.028245   39984 out.go:296] Setting OutFile to fd 1 ...
	I0629 11:57:26.028421   39984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:57:26.028426   39984 out.go:309] Setting ErrFile to fd 2...
	I0629 11:57:26.028430   39984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:57:26.028744   39984 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 11:57:26.029007   39984 out.go:303] Setting JSON to false
	I0629 11:57:26.044844   39984 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":10614,"bootTime":1656518432,"procs":387,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0629 11:57:26.044930   39984 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 11:57:26.071215   39984 out.go:177] * [embed-certs-20220629115611-24356] minikube v1.26.0 on Darwin 12.4
	I0629 11:57:26.114439   39984 notify.go:193] Checking for updates...
	I0629 11:57:26.136279   39984 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 11:57:26.158396   39984 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 11:57:26.180197   39984 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0629 11:57:26.201576   39984 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 11:57:26.223504   39984 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 11:57:26.245798   39984 config.go:178] Loaded profile config "embed-certs-20220629115611-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 11:57:26.246444   39984 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 11:57:26.316909   39984 docker.go:137] docker version: linux-20.10.16
	I0629 11:57:26.317080   39984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:57:26.446690   39984 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 18:57:26.381567768 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:57:26.468611   39984 out.go:177] * Using the docker driver based on existing profile
	I0629 11:57:26.489667   39984 start.go:284] selected driver: docker
	I0629 11:57:26.489698   39984 start.go:808] validating driver "docker" against &{Name:embed-certs-20220629115611-24356 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220629115611-24356 Namespace
:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s Schedu
ledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:57:26.489832   39984 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 11:57:26.493277   39984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:57:26.615477   39984 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 18:57:26.552906823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:57:26.615651   39984 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0629 11:57:26.615666   39984 cni.go:95] Creating CNI manager for ""
	I0629 11:57:26.615676   39984 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:57:26.615683   39984 start_flags.go:310] config:
	{Name:embed-certs-20220629115611-24356 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220629115611-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cl
uster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:57:26.659812   39984 out.go:177] * Starting control plane node embed-certs-20220629115611-24356 in cluster embed-certs-20220629115611-24356
	I0629 11:57:26.681749   39984 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 11:57:26.703472   39984 out.go:177] * Pulling base image ...
	I0629 11:57:26.745579   39984 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 11:57:26.745590   39984 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 11:57:26.745645   39984 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0629 11:57:26.745663   39984 cache.go:57] Caching tarball of preloaded images
	I0629 11:57:26.745789   39984 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 11:57:26.745807   39984 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0629 11:57:26.746584   39984 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/config.json ...
	I0629 11:57:26.809113   39984 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 11:57:26.809128   39984 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 11:57:26.809140   39984 cache.go:208] Successfully downloaded all kic artifacts
	I0629 11:57:26.809200   39984 start.go:352] acquiring machines lock for embed-certs-20220629115611-24356: {Name:mk0bdb566e64e1b997b63c331e0b76362860de65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 11:57:26.809294   39984 start.go:356] acquired machines lock for "embed-certs-20220629115611-24356" in 67.417µs
	I0629 11:57:26.809317   39984 start.go:94] Skipping create...Using existing machine configuration
	I0629 11:57:26.809326   39984 fix.go:55] fixHost starting: 
	I0629 11:57:26.809545   39984 cli_runner.go:164] Run: docker container inspect embed-certs-20220629115611-24356 --format={{.State.Status}}
	I0629 11:57:26.877064   39984 fix.go:103] recreateIfNeeded on embed-certs-20220629115611-24356: state=Stopped err=<nil>
	W0629 11:57:26.877097   39984 fix.go:129] unexpected machine state, will restart: <nil>
	I0629 11:57:26.921097   39984 out.go:177] * Restarting existing docker container for "embed-certs-20220629115611-24356" ...
	I0629 11:57:26.943046   39984 cli_runner.go:164] Run: docker start embed-certs-20220629115611-24356
	I0629 11:57:27.298057   39984 cli_runner.go:164] Run: docker container inspect embed-certs-20220629115611-24356 --format={{.State.Status}}
	I0629 11:57:27.370883   39984 kic.go:416] container "embed-certs-20220629115611-24356" state is running.
	I0629 11:57:27.371467   39984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220629115611-24356
	I0629 11:57:27.450035   39984 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/config.json ...
	I0629 11:57:27.450491   39984 machine.go:88] provisioning docker machine ...
	I0629 11:57:27.450523   39984 ubuntu.go:169] provisioning hostname "embed-certs-20220629115611-24356"
	I0629 11:57:27.450615   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:27.526657   39984 main.go:134] libmachine: Using SSH client type: native
	I0629 11:57:27.526849   39984 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60811 <nil> <nil>}
	I0629 11:57:27.526862   39984 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220629115611-24356 && echo "embed-certs-20220629115611-24356" | sudo tee /etc/hostname
	I0629 11:57:27.655714   39984 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220629115611-24356
	
	I0629 11:57:27.655798   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:27.730765   39984 main.go:134] libmachine: Using SSH client type: native
	I0629 11:57:27.730938   39984 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60811 <nil> <nil>}
	I0629 11:57:27.730953   39984 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220629115611-24356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220629115611-24356/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220629115611-24356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 11:57:27.848950   39984 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 11:57:27.848968   39984 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube}
	I0629 11:57:27.848989   39984 ubuntu.go:177] setting up certificates
	I0629 11:57:27.848996   39984 provision.go:83] configureAuth start
	I0629 11:57:27.849084   39984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220629115611-24356
	I0629 11:57:27.929959   39984 provision.go:138] copyHostCerts
	I0629 11:57:27.930123   39984 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem, removing ...
	I0629 11:57:27.930147   39984 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem
	I0629 11:57:27.930263   39984 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem (1082 bytes)
	I0629 11:57:27.930508   39984 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem, removing ...
	I0629 11:57:27.930517   39984 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem
	I0629 11:57:27.930576   39984 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem (1123 bytes)
	I0629 11:57:27.930756   39984 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem, removing ...
	I0629 11:57:27.930764   39984 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem
	I0629 11:57:27.930836   39984 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem (1675 bytes)
	I0629 11:57:27.930964   39984 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220629115611-24356 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220629115611-24356]
	I0629 11:57:27.999428   39984 provision.go:172] copyRemoteCerts
	I0629 11:57:27.999495   39984 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 11:57:27.999547   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:28.073332   39984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60811 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/embed-certs-20220629115611-24356/id_rsa Username:docker}
	I0629 11:57:28.161829   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0629 11:57:28.180214   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0629 11:57:28.196728   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0629 11:57:28.213826   39984 provision.go:86] duration metric: configureAuth took 364.804405ms
	I0629 11:57:28.213840   39984 ubuntu.go:193] setting minikube options for container-runtime
	I0629 11:57:28.214049   39984 config.go:178] Loaded profile config "embed-certs-20220629115611-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 11:57:28.214114   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:28.285550   39984 main.go:134] libmachine: Using SSH client type: native
	I0629 11:57:28.285697   39984 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60811 <nil> <nil>}
	I0629 11:57:28.285709   39984 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0629 11:57:28.404316   39984 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0629 11:57:28.404329   39984 ubuntu.go:71] root file system type: overlay
	I0629 11:57:28.404488   39984 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0629 11:57:28.404565   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:28.475355   39984 main.go:134] libmachine: Using SSH client type: native
	I0629 11:57:28.475494   39984 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60811 <nil> <nil>}
	I0629 11:57:28.475543   39984 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0629 11:57:28.601145   39984 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0629 11:57:28.601241   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:28.672126   39984 main.go:134] libmachine: Using SSH client type: native
	I0629 11:57:28.672296   39984 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 60811 <nil> <nil>}
	I0629 11:57:28.672310   39984 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0629 11:57:28.795931   39984 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 11:57:28.795946   39984 machine.go:91] provisioned docker machine in 1.345405346s
	I0629 11:57:28.795961   39984 start.go:306] post-start starting for "embed-certs-20220629115611-24356" (driver="docker")
	I0629 11:57:28.795968   39984 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 11:57:28.796037   39984 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 11:57:28.796087   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:28.866293   39984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60811 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/embed-certs-20220629115611-24356/id_rsa Username:docker}
	I0629 11:57:28.951759   39984 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 11:57:28.955285   39984 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 11:57:28.955300   39984 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 11:57:28.955307   39984 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 11:57:28.955312   39984 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 11:57:28.955321   39984 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/addons for local assets ...
	I0629 11:57:28.955430   39984 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files for local assets ...
	I0629 11:57:28.955566   39984 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem -> 243562.pem in /etc/ssl/certs
	I0629 11:57:28.955718   39984 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 11:57:28.962930   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /etc/ssl/certs/243562.pem (1708 bytes)
	I0629 11:57:28.979721   39984 start.go:309] post-start completed in 183.73758ms
	I0629 11:57:28.979798   39984 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 11:57:28.979853   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:29.052656   39984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60811 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/embed-certs-20220629115611-24356/id_rsa Username:docker}
	I0629 11:57:29.137653   39984 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 11:57:29.142085   39984 fix.go:57] fixHost completed within 2.332689804s
	I0629 11:57:29.142096   39984 start.go:81] releasing machines lock for "embed-certs-20220629115611-24356", held for 2.332724366s
	I0629 11:57:29.142164   39984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220629115611-24356
	I0629 11:57:29.211897   39984 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 11:57:29.211897   39984 ssh_runner.go:195] Run: systemctl --version
	I0629 11:57:29.211957   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:29.211969   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:29.288098   39984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60811 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/embed-certs-20220629115611-24356/id_rsa Username:docker}
	I0629 11:57:29.290800   39984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60811 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/embed-certs-20220629115611-24356/id_rsa Username:docker}
	I0629 11:57:29.373189   39984 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0629 11:57:29.857399   39984 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0629 11:57:29.857467   39984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 11:57:29.869954   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 11:57:29.883131   39984 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0629 11:57:29.955029   39984 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0629 11:57:30.019548   39984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 11:57:30.090812   39984 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0629 11:57:30.329132   39984 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0629 11:57:30.399299   39984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 11:57:30.472742   39984 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0629 11:57:30.482620   39984 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0629 11:57:30.482690   39984 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0629 11:57:30.486666   39984 start.go:468] Will wait 60s for crictl version
	I0629 11:57:30.486722   39984 ssh_runner.go:195] Run: sudo crictl version
	I0629 11:57:30.587073   39984 start.go:477] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0629 11:57:30.587149   39984 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 11:57:30.622161   39984 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 11:57:30.700040   39984 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0629 11:57:30.700166   39984 cli_runner.go:164] Run: docker exec -t embed-certs-20220629115611-24356 dig +short host.docker.internal
	I0629 11:57:30.827612   39984 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0629 11:57:30.827718   39984 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0629 11:57:30.831832   39984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 11:57:30.841288   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:30.913390   39984 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 11:57:30.913460   39984 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 11:57:30.944383   39984 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0629 11:57:30.944399   39984 docker.go:533] Images already preloaded, skipping extraction
	I0629 11:57:30.944478   39984 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 11:57:30.975315   39984 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0629 11:57:30.975343   39984 cache_images.go:84] Images are preloaded, skipping loading
	I0629 11:57:30.975415   39984 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0629 11:57:31.045851   39984 cni.go:95] Creating CNI manager for ""
	I0629 11:57:31.050165   39984 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:57:31.050195   39984 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0629 11:57:31.050222   39984 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220629115611-24356 NodeName:embed-certs-20220629115611-24356 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 11:57:31.050404   39984 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-20220629115611-24356"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0629 11:57:31.050551   39984 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-20220629115611-24356 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220629115611-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0629 11:57:31.050644   39984 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0629 11:57:31.059402   39984 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 11:57:31.059454   39984 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0629 11:57:31.066631   39984 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (494 bytes)
	I0629 11:57:31.079513   39984 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 11:57:31.092419   39984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I0629 11:57:31.105233   39984 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0629 11:57:31.108958   39984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 11:57:31.118325   39984 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356 for IP: 192.168.67.2
	I0629 11:57:31.118436   39984 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key
	I0629 11:57:31.118497   39984 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key
	I0629 11:57:31.118573   39984 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/client.key
	I0629 11:57:31.118636   39984 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/apiserver.key.c7fa3a9e
	I0629 11:57:31.118686   39984 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/proxy-client.key
	I0629 11:57:31.118892   39984 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem (1338 bytes)
	W0629 11:57:31.118931   39984 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356_empty.pem, impossibly tiny 0 bytes
	I0629 11:57:31.118944   39984 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem (1679 bytes)
	I0629 11:57:31.118978   39984 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem (1082 bytes)
	I0629 11:57:31.119010   39984 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem (1123 bytes)
	I0629 11:57:31.119037   39984 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem (1675 bytes)
	I0629 11:57:31.119098   39984 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem (1708 bytes)
	I0629 11:57:31.119668   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0629 11:57:31.136862   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0629 11:57:31.153564   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0629 11:57:31.170777   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/embed-certs-20220629115611-24356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0629 11:57:31.187816   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 11:57:31.204573   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0629 11:57:31.221464   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 11:57:31.239026   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0629 11:57:31.255730   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem --> /usr/share/ca-certificates/24356.pem (1338 bytes)
	I0629 11:57:31.272688   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /usr/share/ca-certificates/243562.pem (1708 bytes)
	I0629 11:57:31.289538   39984 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 11:57:31.306720   39984 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0629 11:57:31.319465   39984 ssh_runner.go:195] Run: openssl version
	I0629 11:57:31.324535   39984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24356.pem && ln -fs /usr/share/ca-certificates/24356.pem /etc/ssl/certs/24356.pem"
	I0629 11:57:31.332540   39984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24356.pem
	I0629 11:57:31.336652   39984 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 17:58 /usr/share/ca-certificates/24356.pem
	I0629 11:57:31.336698   39984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24356.pem
	I0629 11:57:31.342301   39984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24356.pem /etc/ssl/certs/51391683.0"
	I0629 11:57:31.349622   39984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/243562.pem && ln -fs /usr/share/ca-certificates/243562.pem /etc/ssl/certs/243562.pem"
	I0629 11:57:31.357282   39984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/243562.pem
	I0629 11:57:31.361696   39984 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 17:58 /usr/share/ca-certificates/243562.pem
	I0629 11:57:31.361747   39984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/243562.pem
	I0629 11:57:31.366990   39984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/243562.pem /etc/ssl/certs/3ec20f2e.0"
	I0629 11:57:31.374502   39984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 11:57:31.382218   39984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:57:31.385803   39984 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 17:54 /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:57:31.385848   39984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 11:57:31.390826   39984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 11:57:31.397764   39984 kubeadm.go:395] StartCluster: {Name:embed-certs-20220629115611-24356 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220629115611-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:57:31.397873   39984 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 11:57:31.427173   39984 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0629 11:57:31.434832   39984 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0629 11:57:31.434846   39984 kubeadm.go:626] restartCluster start
	I0629 11:57:31.434897   39984 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0629 11:57:31.441586   39984 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:31.441651   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 11:57:31.513483   39984 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220629115611-24356" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 11:57:31.513643   39984 kubeconfig.go:127] "embed-certs-20220629115611-24356" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig - will repair!
	I0629 11:57:31.513999   39984 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk20ebad566718388182fa7c9da1cb4ef6bd9ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 11:57:31.515316   39984 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0629 11:57:31.530420   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:31.530480   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:31.538594   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:31.738692   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:31.738802   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:31.747924   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:31.940764   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:31.940962   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:31.953388   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:32.138925   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:32.139021   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:32.150641   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:32.339007   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:32.339144   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:32.350071   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:32.538785   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:32.538883   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:32.549429   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:32.740773   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:32.740914   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:32.751283   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:32.940779   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:32.940965   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:32.952319   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:33.139151   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:33.139215   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:33.149931   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:33.338763   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:33.338882   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:33.347730   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:33.540825   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:33.540989   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:33.551698   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:33.739521   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:33.739687   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:33.750188   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:33.939155   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:33.939254   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:33.949817   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:34.140162   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:34.140353   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:34.150863   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:34.340139   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:34.340257   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:34.351094   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:34.540169   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:34.540353   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:34.551334   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:34.551344   39984 api_server.go:165] Checking apiserver status ...
	I0629 11:57:34.551403   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 11:57:34.559886   39984 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:34.559897   39984 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0629 11:57:34.559905   39984 kubeadm.go:1092] stopping kube-system containers ...
	I0629 11:57:34.559958   39984 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 11:57:34.590002   39984 docker.go:434] Stopping containers: [666dcbf78fe0 ddb4a3ba17a8 6b729b461ef0 b814135cd0a1 e13a428052eb 0dd4b988196b fae1c540c6c3 4d48afea68d9 196dbfd07a20 439d99c75b27 cc212149d36c 984a7e540bed 80e09584f648 9db02521aa04 3369302f8f17 d66a49ab53be]
	I0629 11:57:34.590078   39984 ssh_runner.go:195] Run: docker stop 666dcbf78fe0 ddb4a3ba17a8 6b729b461ef0 b814135cd0a1 e13a428052eb 0dd4b988196b fae1c540c6c3 4d48afea68d9 196dbfd07a20 439d99c75b27 cc212149d36c 984a7e540bed 80e09584f648 9db02521aa04 3369302f8f17 d66a49ab53be
	I0629 11:57:34.622333   39984 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0629 11:57:34.633894   39984 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 11:57:34.642013   39984 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun 29 18:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun 29 18:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Jun 29 18:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun 29 18:56 /etc/kubernetes/scheduler.conf
	
	I0629 11:57:34.642067   39984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0629 11:57:34.650335   39984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0629 11:57:34.658274   39984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0629 11:57:34.666006   39984 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:34.666067   39984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0629 11:57:34.674854   39984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0629 11:57:34.682511   39984 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:57:34.682565   39984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0629 11:57:34.689948   39984 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 11:57:34.697944   39984 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0629 11:57:34.697960   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:57:34.743910   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:57:35.702128   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:57:35.884195   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:57:35.931141   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:57:35.978909   39984 api_server.go:51] waiting for apiserver process to appear ...
	I0629 11:57:35.978974   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:57:36.489509   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:57:36.991297   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:57:37.491468   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:57:37.539412   39984 api_server.go:71] duration metric: took 1.560450953s to wait for apiserver process to appear ...
	I0629 11:57:37.539430   39984 api_server.go:87] waiting for apiserver healthz status ...
	I0629 11:57:37.539444   39984 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60815/healthz ...
	I0629 11:57:40.290730   39984 api_server.go:266] https://127.0.0.1:60815/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0629 11:57:40.290748   39984 api_server.go:102] status: https://127.0.0.1:60815/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0629 11:57:40.792942   39984 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60815/healthz ...
	I0629 11:57:40.800561   39984 api_server.go:266] https://127.0.0.1:60815/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 11:57:40.800574   39984 api_server.go:102] status: https://127.0.0.1:60815/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 11:57:41.291032   39984 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60815/healthz ...
	I0629 11:57:41.296338   39984 api_server.go:266] https://127.0.0.1:60815/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 11:57:41.296358   39984 api_server.go:102] status: https://127.0.0.1:60815/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 11:57:41.791011   39984 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60815/healthz ...
	I0629 11:57:41.797671   39984 api_server.go:266] https://127.0.0.1:60815/healthz returned 200:
	ok
	I0629 11:57:41.804473   39984 api_server.go:140] control plane version: v1.24.2
	I0629 11:57:41.804485   39984 api_server.go:130] duration metric: took 4.264923117s to wait for apiserver health ...
	I0629 11:57:41.804492   39984 cni.go:95] Creating CNI manager for ""
	I0629 11:57:41.804502   39984 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 11:57:41.804513   39984 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 11:57:41.832519   39984 system_pods.go:59] 8 kube-system pods found
	I0629 11:57:41.832535   39984 system_pods.go:61] "coredns-6d4b75cb6d-pnzfc" [d1c86d77-1548-4a2f-b9c7-42b4bf4a6a3d] Running
	I0629 11:57:41.832541   39984 system_pods.go:61] "etcd-embed-certs-20220629115611-24356" [d91824a5-2512-44b7-82ef-0fa1347aaabf] Running
	I0629 11:57:41.832547   39984 system_pods.go:61] "kube-apiserver-embed-certs-20220629115611-24356" [da634837-5c4e-4f9f-9a67-2cc008c0440b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0629 11:57:41.832553   39984 system_pods.go:61] "kube-controller-manager-embed-certs-20220629115611-24356" [52be6bd2-1731-4717-bc8a-e66fd7626c22] Running
	I0629 11:57:41.832556   39984 system_pods.go:61] "kube-proxy-pcxgq" [27e07fcd-c6b6-438e-a098-a226b21b33e1] Running
	I0629 11:57:41.832561   39984 system_pods.go:61] "kube-scheduler-embed-certs-20220629115611-24356" [09df9d02-46aa-44bc-afe4-b16bcd31afd0] Running
	I0629 11:57:41.832566   39984 system_pods.go:61] "metrics-server-5c6f97fb75-rxdvx" [f03ad7f1-c31c-4563-a988-6b36ea877e9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 11:57:41.832573   39984 system_pods.go:61] "storage-provisioner" [941d4d53-8827-455c-bf13-eccd87cfbfe5] Running
	I0629 11:57:41.832577   39984 system_pods.go:74] duration metric: took 28.058937ms to wait for pod list to return data ...
	I0629 11:57:41.832583   39984 node_conditions.go:102] verifying NodePressure condition ...
	I0629 11:57:41.835565   39984 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0629 11:57:41.835583   39984 node_conditions.go:123] node cpu capacity is 6
	I0629 11:57:41.835591   39984 node_conditions.go:105] duration metric: took 3.005124ms to run NodePressure ...
	I0629 11:57:41.835602   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 11:57:42.037431   39984 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0629 11:57:42.043980   39984 kubeadm.go:777] kubelet initialised
	I0629 11:57:42.043992   39984 kubeadm.go:778] duration metric: took 6.540999ms waiting for restarted kubelet to initialise ...
	I0629 11:57:42.044000   39984 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 11:57:42.050820   39984 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-pnzfc" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:42.056213   39984 pod_ready.go:92] pod "coredns-6d4b75cb6d-pnzfc" in "kube-system" namespace has status "Ready":"True"
	I0629 11:57:42.056222   39984 pod_ready.go:81] duration metric: took 5.36795ms waiting for pod "coredns-6d4b75cb6d-pnzfc" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:42.056229   39984 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:42.061951   39984 pod_ready.go:92] pod "etcd-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 11:57:42.061961   39984 pod_ready.go:81] duration metric: took 5.728041ms waiting for pod "etcd-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:42.061968   39984 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:44.073865   39984 pod_ready.go:102] pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 11:57:46.077904   39984 pod_ready.go:102] pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 11:57:48.576009   39984 pod_ready.go:102] pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 11:57:51.075775   39984 pod_ready.go:102] pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 11:57:53.075358   39984 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 11:57:53.075371   39984 pod_ready.go:81] duration metric: took 11.01306776s waiting for pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:53.075377   39984 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:53.079816   39984 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 11:57:53.079824   39984 pod_ready.go:81] duration metric: took 4.442048ms waiting for pod "kube-controller-manager-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:53.079829   39984 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pcxgq" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:53.084576   39984 pod_ready.go:92] pod "kube-proxy-pcxgq" in "kube-system" namespace has status "Ready":"True"
	I0629 11:57:53.084583   39984 pod_ready.go:81] duration metric: took 4.749511ms waiting for pod "kube-proxy-pcxgq" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:53.084589   39984 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:53.088625   39984 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 11:57:53.088632   39984 pod_ready.go:81] duration metric: took 4.039623ms waiting for pod "kube-scheduler-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:53.088640   39984 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace to be "Ready" ...
	I0629 11:57:55.097461   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:57:57.100786   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:57:59.601286   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:02.099451   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:04.600718   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:07.101221   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:09.600874   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:12.099278   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:14.601619   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:17.101075   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:19.102702   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:21.600733   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:24.099200   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:26.102268   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:28.599567   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:30.599655   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:32.599970   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:35.101359   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:37.600978   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:40.101820   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:42.601212   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:45.099127   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:47.100293   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:49.101795   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:51.600853   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:54.099798   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:56.102348   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:58:58.599972   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:00.602127   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:03.099999   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:05.602102   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	W0629 11:59:09.269281   39321 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0629 11:59:09.269312   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0629 11:59:09.691823   39321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 11:59:09.701755   39321 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 11:59:09.701805   39321 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 11:59:09.709759   39321 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 11:59:09.709777   39321 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 11:59:10.453324   39321 out.go:204]   - Generating certificates and keys ...
	I0629 11:59:08.103868   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:10.600504   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:13.100908   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:15.103349   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:11.075112   39321 out.go:204]   - Booting up control plane ...
	I0629 11:59:17.600597   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:19.602441   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:22.101027   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:24.601921   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:27.102740   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:29.103218   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:31.602024   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:33.603482   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:36.104291   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:38.601027   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:40.602533   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:42.604039   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:45.105214   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:47.603677   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:49.606151   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:52.104004   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:54.106224   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:56.605130   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 11:59:58.606838   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:01.105420   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:03.107040   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:05.605975   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:07.607176   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:09.607415   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:12.108174   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:14.607016   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:16.608058   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:18.608278   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:21.108388   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:23.110530   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:25.609089   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:27.610444   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:30.108624   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:32.109598   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:34.613349   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:37.108006   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:39.109710   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:41.608341   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:43.610410   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:46.106908   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:48.108652   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:50.608608   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:52.609008   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:55.109271   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:00:57.610864   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:00.109777   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:02.109951   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:04.110413   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:06.018998   39321 kubeadm.go:397] StartCluster complete in 7m59.760603139s
	I0629 12:01:06.019078   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0629 12:01:06.047361   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.083489   39321 logs.go:276] No container was found matching "kube-apiserver"
	I0629 12:01:06.083580   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0629 12:01:06.118045   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.118058   39321 logs.go:276] No container was found matching "etcd"
	I0629 12:01:06.118119   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0629 12:01:06.148512   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.148524   39321 logs.go:276] No container was found matching "coredns"
	I0629 12:01:06.148587   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0629 12:01:06.177707   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.177719   39321 logs.go:276] No container was found matching "kube-scheduler"
	I0629 12:01:06.177776   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0629 12:01:06.210822   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.210835   39321 logs.go:276] No container was found matching "kube-proxy"
	I0629 12:01:06.210895   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0629 12:01:06.243800   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.243812   39321 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0629 12:01:06.243868   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0629 12:01:06.274291   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.274305   39321 logs.go:276] No container was found matching "storage-provisioner"
	I0629 12:01:06.274368   39321 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0629 12:01:06.308104   39321 logs.go:274] 0 containers: []
	W0629 12:01:06.308119   39321 logs.go:276] No container was found matching "kube-controller-manager"
	I0629 12:01:06.308126   39321 logs.go:123] Gathering logs for kubelet ...
	I0629 12:01:06.308133   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0629 12:01:06.347949   39321 logs.go:123] Gathering logs for dmesg ...
	I0629 12:01:06.347968   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0629 12:01:06.361249   39321 logs.go:123] Gathering logs for describe nodes ...
	I0629 12:01:06.361264   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0629 12:01:06.413780   39321 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0629 12:01:06.413793   39321 logs.go:123] Gathering logs for Docker ...
	I0629 12:01:06.413800   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0629 12:01:06.427622   39321 logs.go:123] Gathering logs for container status ...
	I0629 12:01:06.427633   39321 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0629 12:01:08.487011   39321 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.059302402s)
	W0629 12:01:08.487125   39321 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0629 12:01:08.487150   39321 out.go:239] * 
	W0629 12:01:08.487259   39321 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0629 12:01:08.487274   39321 out.go:239] * 
	W0629 12:01:08.487946   39321 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0629 12:01:08.550616   39321 out.go:177] 
	W0629 12:01:08.592802   39321 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0629 12:01:08.592939   39321 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0629 12:01:08.593004   39321 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0629 12:01:08.634371   39321 out.go:177] 
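The kubeadm error text above suggests inspecting control-plane containers with `docker ps -a | grep kube | grep -v pause`. A minimal sketch of that filter pipeline, run against made-up sample `docker ps` output so it works without a Docker daemon (the container IDs and image names are illustrative only):

```shell
# Hedged sketch of the pipeline kubeadm recommends:
#   docker ps -a | grep kube | grep -v pause
# Demonstrated on fabricated sample output; on a real node you would
# pipe `docker ps -a` instead of this here-string.
sample='abc123  k8s.gcr.io/kube-apiserver  k8s_kube-apiserver_x
def456  k8s.gcr.io/pause:3.1       k8s_POD_kube-apiserver_x
789aaa  nginx:latest               web'

# Keep Kubernetes containers, drop the sandbox "pause" containers.
printf '%s\n' "$sample" | grep kube | grep -v pause
# → prints only the kube-apiserver line
```

Once a failing container ID is identified this way, `docker logs <container-id>` (as the log above notes) shows why the component exited.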
	I0629 12:01:06.612352   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:09.109064   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:11.110458   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:13.611037   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:16.110193   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:18.610846   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:21.112357   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:23.610136   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:25.612152   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:28.111145   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:30.609416   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:32.611545   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:35.110917   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:37.111203   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:39.611000   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:41.618644   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:44.111165   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:46.612134   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:49.111541   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:51.611950   39984 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace has status "Ready":"False"
	I0629 12:01:53.104400   39984 pod_ready.go:81] duration metric: took 4m0.003882789s waiting for pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace to be "Ready" ...
	E0629 12:01:53.104484   39984 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-rxdvx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0629 12:01:53.104529   39984 pod_ready.go:38] duration metric: took 4m11.048321541s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 12:01:53.104568   39984 kubeadm.go:630] restartCluster took 4m21.657198964s
	W0629 12:01:53.104718   39984 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0629 12:01:53.104746   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0629 12:01:55.470034   39984 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.365199161s)
	I0629 12:01:55.470094   39984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 12:01:55.480295   39984 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 12:01:55.488199   39984 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 12:01:55.488247   39984 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 12:01:55.495381   39984 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 12:01:55.495403   39984 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 12:01:55.784055   39984 out.go:204]   - Generating certificates and keys ...
	I0629 12:01:56.585517   39984 out.go:204]   - Booting up control plane ...
	I0629 12:02:03.144758   39984 out.go:204]   - Configuring RBAC rules ...
	I0629 12:02:03.522675   39984 cni.go:95] Creating CNI manager for ""
	I0629 12:02:03.522687   39984 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 12:02:03.522702   39984 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0629 12:02:03.522795   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:03.522801   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=80ef72c6e06144133907f90b1b2924df52b551ed minikube.k8s.io/name=embed-certs-20220629115611-24356 minikube.k8s.io/updated_at=2022_06_29T12_02_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:03.661591   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:03.661594   39984 ops.go:34] apiserver oom_adj: -16
	I0629 12:02:04.218161   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:04.718171   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:05.217246   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:05.717318   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:06.218501   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:06.718570   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:07.216723   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:07.717314   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:08.218695   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:08.718739   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:09.216580   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:09.718244   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:10.216838   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:10.716640   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:11.216892   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:11.716821   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:12.217178   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:12.716857   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:13.218711   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:13.716758   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:14.218899   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:14.718977   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:15.217017   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:15.718977   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:16.216819   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:16.717000   39984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:02:16.789950   39984 kubeadm.go:1045] duration metric: took 13.266821249s to wait for elevateKubeSystemPrivileges.
	I0629 12:02:16.789969   39984 kubeadm.go:397] StartCluster complete in 4m45.378983921s
	I0629 12:02:16.789985   39984 settings.go:142] acquiring lock: {Name:mk8cd784535a926dd1b6955ad1b3a357865d16d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 12:02:16.790067   39984 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 12:02:16.790800   39984 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk20ebad566718388182fa7c9da1cb4ef6bd9ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 12:02:17.305700   39984 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220629115611-24356" rescaled to 1
	I0629 12:02:17.305741   39984 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 12:02:17.305750   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0629 12:02:17.305782   39984 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0629 12:02:17.305909   39984 config.go:178] Loaded profile config "embed-certs-20220629115611-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 12:02:17.329199   39984 out.go:177] * Verifying Kubernetes components...
	I0629 12:02:17.329263   39984 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220629115611-24356"
	I0629 12:02:17.329270   39984 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220629115611-24356"
	I0629 12:02:17.387378   39984 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220629115611-24356"
	I0629 12:02:17.387386   39984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 12:02:17.329275   39984 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220629115611-24356"
	W0629 12:02:17.387414   39984 addons.go:162] addon metrics-server should already be in state true
	I0629 12:02:17.387427   39984 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220629115611-24356"
	I0629 12:02:17.329281   39984 addons.go:65] Setting dashboard=true in profile "embed-certs-20220629115611-24356"
	W0629 12:02:17.387455   39984 addons.go:162] addon storage-provisioner should already be in state true
	I0629 12:02:17.387480   39984 host.go:66] Checking if "embed-certs-20220629115611-24356" exists ...
	I0629 12:02:17.387480   39984 addons.go:153] Setting addon dashboard=true in "embed-certs-20220629115611-24356"
	I0629 12:02:17.387475   39984 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220629115611-24356"
	W0629 12:02:17.387500   39984 addons.go:162] addon dashboard should already be in state true
	I0629 12:02:17.387557   39984 host.go:66] Checking if "embed-certs-20220629115611-24356" exists ...
	I0629 12:02:17.387573   39984 host.go:66] Checking if "embed-certs-20220629115611-24356" exists ...
	I0629 12:02:17.388046   39984 cli_runner.go:164] Run: docker container inspect embed-certs-20220629115611-24356 --format={{.State.Status}}
	I0629 12:02:17.388231   39984 cli_runner.go:164] Run: docker container inspect embed-certs-20220629115611-24356 --format={{.State.Status}}
	I0629 12:02:17.389797   39984 cli_runner.go:164] Run: docker container inspect embed-certs-20220629115611-24356 --format={{.State.Status}}
	I0629 12:02:17.392847   39984 cli_runner.go:164] Run: docker container inspect embed-certs-20220629115611-24356 --format={{.State.Status}}
	I0629 12:02:17.402122   39984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0629 12:02:17.443969   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 12:02:17.544262   39984 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0629 12:02:17.545082   39984 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220629115611-24356"
	W0629 12:02:17.581288   39984 addons.go:162] addon default-storageclass should already be in state true
	I0629 12:02:17.618472   39984 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 12:02:17.640175   39984 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0629 12:02:17.640200   39984 host.go:66] Checking if "embed-certs-20220629115611-24356" exists ...
	I0629 12:02:17.661256   39984 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0629 12:02:17.682381   39984 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 12:02:17.724466   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0629 12:02:17.682806   39984 cli_runner.go:164] Run: docker container inspect embed-certs-20220629115611-24356 --format={{.State.Status}}
	I0629 12:02:17.724505   39984 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0629 12:02:17.724534   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0629 12:02:17.703153   39984 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0629 12:02:17.724565   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0629 12:02:17.724601   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 12:02:17.724690   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 12:02:17.724709   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 12:02:17.729992   39984 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220629115611-24356" to be "Ready" ...
	I0629 12:02:17.753250   39984 node_ready.go:49] node "embed-certs-20220629115611-24356" has status "Ready":"True"
	I0629 12:02:17.753270   39984 node_ready.go:38] duration metric: took 23.142427ms waiting for node "embed-certs-20220629115611-24356" to be "Ready" ...
	I0629 12:02:17.753280   39984 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 12:02:17.761227   39984 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-4bfwq" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:17.834816   39984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60811 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/embed-certs-20220629115611-24356/id_rsa Username:docker}
	I0629 12:02:17.835163   39984 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0629 12:02:17.835173   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0629 12:02:17.835233   39984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220629115611-24356
	I0629 12:02:17.835920   39984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60811 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/embed-certs-20220629115611-24356/id_rsa Username:docker}
	I0629 12:02:17.837624   39984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60811 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/embed-certs-20220629115611-24356/id_rsa Username:docker}
	I0629 12:02:17.918047   39984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60811 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/embed-certs-20220629115611-24356/id_rsa Username:docker}
	I0629 12:02:17.967072   39984 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0629 12:02:17.967089   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0629 12:02:17.976382   39984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 12:02:18.044346   39984 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0629 12:02:18.044362   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0629 12:02:18.082048   39984 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0629 12:02:18.082062   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0629 12:02:18.146371   39984 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0629 12:02:18.146455   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0629 12:02:18.179040   39984 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0629 12:02:18.179047   39984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0629 12:02:18.179057   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0629 12:02:18.246519   39984 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0629 12:02:18.246537   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0629 12:02:18.345640   39984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0629 12:02:18.353148   39984 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0629 12:02:18.353160   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0629 12:02:18.454046   39984 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0629 12:02:18.454069   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0629 12:02:18.472492   39984 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0629 12:02:18.472505   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0629 12:02:18.547601   39984 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0629 12:02:18.547613   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0629 12:02:18.578632   39984 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0629 12:02:18.578647   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0629 12:02:18.648142   39984 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0629 12:02:18.648163   39984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0629 12:02:18.681571   39984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0629 12:02:18.750483   39984 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.348276451s)
	I0629 12:02:18.750500   39984 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0629 12:02:18.888795   39984 addons.go:383] Verifying addon metrics-server=true in "embed-certs-20220629115611-24356"
	I0629 12:02:19.589385   39984 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0629 12:02:19.648351   39984 addons.go:414] enableAddons completed in 2.342474185s
	I0629 12:02:19.777473   39984 pod_ready.go:102] pod "coredns-6d4b75cb6d-4bfwq" in "kube-system" namespace has status "Ready":"False"
	I0629 12:02:21.779635   39984 pod_ready.go:102] pod "coredns-6d4b75cb6d-4bfwq" in "kube-system" namespace has status "Ready":"False"
	I0629 12:02:22.776738   39984 pod_ready.go:92] pod "coredns-6d4b75cb6d-4bfwq" in "kube-system" namespace has status "Ready":"True"
	I0629 12:02:22.776752   39984 pod_ready.go:81] duration metric: took 5.015355158s waiting for pod "coredns-6d4b75cb6d-4bfwq" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:22.776758   39984 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-689nj" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:22.781084   39984 pod_ready.go:92] pod "coredns-6d4b75cb6d-689nj" in "kube-system" namespace has status "Ready":"True"
	I0629 12:02:22.781092   39984 pod_ready.go:81] duration metric: took 4.329231ms waiting for pod "coredns-6d4b75cb6d-689nj" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:22.781098   39984 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:22.784942   39984 pod_ready.go:92] pod "etcd-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:02:22.784949   39984 pod_ready.go:81] duration metric: took 3.847521ms waiting for pod "etcd-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:22.784955   39984 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:22.788894   39984 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:02:22.788903   39984 pod_ready.go:81] duration metric: took 3.933089ms waiting for pod "kube-apiserver-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:22.788909   39984 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:22.792968   39984 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:02:22.792976   39984 pod_ready.go:81] duration metric: took 4.054757ms waiting for pod "kube-controller-manager-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:22.792982   39984 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9whjc" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:23.174612   39984 pod_ready.go:92] pod "kube-proxy-9whjc" in "kube-system" namespace has status "Ready":"True"
	I0629 12:02:23.174622   39984 pod_ready.go:81] duration metric: took 381.624505ms waiting for pod "kube-proxy-9whjc" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:23.174628   39984 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:23.574939   39984 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220629115611-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:02:23.574948   39984 pod_ready.go:81] duration metric: took 400.303754ms waiting for pod "kube-scheduler-embed-certs-20220629115611-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:02:23.574954   39984 pod_ready.go:38] duration metric: took 5.821490673s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 12:02:23.574966   39984 api_server.go:51] waiting for apiserver process to appear ...
	I0629 12:02:23.575014   39984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:02:23.584594   39984 api_server.go:71] duration metric: took 6.278645942s to wait for apiserver process to appear ...
	I0629 12:02:23.584605   39984 api_server.go:87] waiting for apiserver healthz status ...
	I0629 12:02:23.584614   39984 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60815/healthz ...
	I0629 12:02:23.589756   39984 api_server.go:266] https://127.0.0.1:60815/healthz returned 200:
	ok
	I0629 12:02:23.590804   39984 api_server.go:140] control plane version: v1.24.2
	I0629 12:02:23.590813   39984 api_server.go:130] duration metric: took 6.203753ms to wait for apiserver health ...
	I0629 12:02:23.590818   39984 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 12:02:23.777474   39984 system_pods.go:59] 9 kube-system pods found
	I0629 12:02:23.777488   39984 system_pods.go:61] "coredns-6d4b75cb6d-4bfwq" [9ea6d67d-f471-4bb3-9201-579f2d373e85] Running
	I0629 12:02:23.777492   39984 system_pods.go:61] "coredns-6d4b75cb6d-689nj" [23db562d-ab6b-4c56-8d94-31aea6542072] Running
	I0629 12:02:23.777495   39984 system_pods.go:61] "etcd-embed-certs-20220629115611-24356" [54618f39-914f-4ec2-9df9-a250f11c9a2c] Running
	I0629 12:02:23.777512   39984 system_pods.go:61] "kube-apiserver-embed-certs-20220629115611-24356" [3907cf9f-b479-4990-a2bb-00926370ca98] Running
	I0629 12:02:23.777519   39984 system_pods.go:61] "kube-controller-manager-embed-certs-20220629115611-24356" [5fc7c4d6-5c8c-40b7-a170-12edad850417] Running
	I0629 12:02:23.777524   39984 system_pods.go:61] "kube-proxy-9whjc" [a127008e-42de-4155-a698-e83602edb663] Running
	I0629 12:02:23.777527   39984 system_pods.go:61] "kube-scheduler-embed-certs-20220629115611-24356" [35f4ef5a-3772-4f5e-836b-8feaebdadb30] Running
	I0629 12:02:23.777532   39984 system_pods.go:61] "metrics-server-5c6f97fb75-plpnv" [af632ef8-e7ac-46ee-b7a0-3552276f17e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 12:02:23.777536   39984 system_pods.go:61] "storage-provisioner" [4c55837a-95e7-48e8-a535-c3dcd1a36389] Running
	I0629 12:02:23.777540   39984 system_pods.go:74] duration metric: took 186.713608ms to wait for pod list to return data ...
	I0629 12:02:23.777545   39984 default_sa.go:34] waiting for default service account to be created ...
	I0629 12:02:23.975562   39984 default_sa.go:45] found service account: "default"
	I0629 12:02:23.975577   39984 default_sa.go:55] duration metric: took 198.02077ms for default service account to be created ...
	I0629 12:02:23.975583   39984 system_pods.go:116] waiting for k8s-apps to be running ...
	I0629 12:02:24.177955   39984 system_pods.go:86] 8 kube-system pods found
	I0629 12:02:24.177971   39984 system_pods.go:89] "coredns-6d4b75cb6d-4bfwq" [9ea6d67d-f471-4bb3-9201-579f2d373e85] Running
	I0629 12:02:24.177976   39984 system_pods.go:89] "etcd-embed-certs-20220629115611-24356" [54618f39-914f-4ec2-9df9-a250f11c9a2c] Running
	I0629 12:02:24.177995   39984 system_pods.go:89] "kube-apiserver-embed-certs-20220629115611-24356" [3907cf9f-b479-4990-a2bb-00926370ca98] Running
	I0629 12:02:24.178003   39984 system_pods.go:89] "kube-controller-manager-embed-certs-20220629115611-24356" [5fc7c4d6-5c8c-40b7-a170-12edad850417] Running
	I0629 12:02:24.178008   39984 system_pods.go:89] "kube-proxy-9whjc" [a127008e-42de-4155-a698-e83602edb663] Running
	I0629 12:02:24.178012   39984 system_pods.go:89] "kube-scheduler-embed-certs-20220629115611-24356" [35f4ef5a-3772-4f5e-836b-8feaebdadb30] Running
	I0629 12:02:24.178021   39984 system_pods.go:89] "metrics-server-5c6f97fb75-plpnv" [af632ef8-e7ac-46ee-b7a0-3552276f17e9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 12:02:24.178025   39984 system_pods.go:89] "storage-provisioner" [4c55837a-95e7-48e8-a535-c3dcd1a36389] Running
	I0629 12:02:24.178034   39984 system_pods.go:126] duration metric: took 202.438161ms to wait for k8s-apps to be running ...
	I0629 12:02:24.178039   39984 system_svc.go:44] waiting for kubelet service to be running ....
	I0629 12:02:24.178092   39984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 12:02:24.187986   39984 system_svc.go:56] duration metric: took 9.942208ms WaitForService to wait for kubelet.
	I0629 12:02:24.187999   39984 kubeadm.go:572] duration metric: took 6.882034131s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0629 12:02:24.188014   39984 node_conditions.go:102] verifying NodePressure condition ...
	I0629 12:02:24.373750   39984 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0629 12:02:24.373764   39984 node_conditions.go:123] node cpu capacity is 6
	I0629 12:02:24.373770   39984 node_conditions.go:105] duration metric: took 185.747482ms to run NodePressure ...
	I0629 12:02:24.373781   39984 start.go:213] waiting for startup goroutines ...
	I0629 12:02:24.406628   39984 start.go:506] kubectl: 1.24.0, cluster: 1.24.2 (minor skew: 0)
	I0629 12:02:24.428518   39984 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220629115611-24356" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-29 18:57:27 UTC, end at Wed 2022-06-29 19:03:29 UTC. --
	Jun 29 19:01:54 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:01:54.686258850Z" level=info msg="ignoring event" container=95c1df9a1a6ba6a12b9cb98ec2fa3176fe8ea24f7e09c64653ee7ad6e55283ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:01:54 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:01:54.755146311Z" level=info msg="ignoring event" container=88a08097920d3e73aed93851f84963df312ee582664b477d8166eb3c17d3f96d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:01:54 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:01:54.880985456Z" level=info msg="ignoring event" container=b9125da07096037dcfb4169fefd294375762310eaf048649823e62539d960fde module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:01:54 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:01:54.943055336Z" level=info msg="ignoring event" container=8f01f79963fb4dc2ae009f24b771d96c97d216851a0fd85da9edfc69202daf1c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:01:54 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:01:54.997050114Z" level=info msg="ignoring event" container=3193b00d499be3b9a792ecd0cd7f6d32d625701f749046b1f97ace72db3188d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:01:55 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:01:55.062688681Z" level=info msg="ignoring event" container=d60146d2972f6dd6062eadda047abbb34545b63d33a5251582617cc87c9cd836 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:01:55 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:01:55.147198478Z" level=info msg="ignoring event" container=853ad8f1abb5efa793c0ef8da991c8b11c9516bad27d847f866a6caeb012a8ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:02:19 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:19.839611088Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:02:19 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:19.839700758Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:02:19 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:19.840824717Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:02:22 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:22.204960248Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 29 19:02:22 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:22.896348617Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 29 19:02:23 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:23.197208235Z" level=info msg="ignoring event" container=d4907e1b916f279140de583694aa93b58711450056410b641d5b42dd6bdaf036 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:02:23 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:23.244818769Z" level=info msg="ignoring event" container=73b9e1fe7949a8bcc52a86b2267f357847ac00074ffdebff6f1053f516a9ac99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:02:28 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:28.371373191Z" level=info msg="ignoring event" container=d947c4d626d5b838dbe1032f278e1ff0bafc3033f80442457b8e29730d378c9e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:02:28 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:28.413306022Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jun 29 19:02:29 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:29.268936321Z" level=info msg="ignoring event" container=27d19a135b91affbcc9966b5c5da10f66bc519d33861c1c99916f518bc04d89b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:02:33 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:33.358543495Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:02:33 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:33.358648626Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:02:33 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:33.454775180Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:02:47 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:02:47.148580207Z" level=info msg="ignoring event" container=de7b046723782bfee336a6ac80f1646f3a101a4e6dccc317099232c6073a425c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:03:26 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:03:26.260868942Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:03:26 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:03:26.260892535Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:03:26 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:03:26.284441322Z" level=info msg="ignoring event" container=7fc136d61ce7751c6056a37eff3d56e0a076da300c73616b5d9f4bc7a578dec3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:03:26 embed-certs-20220629115611-24356 dockerd[494]: time="2022-06-29T19:03:26.309512248Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	7fc136d61ce77       a90209bb39e3d                                                                                    4 seconds ago        Exited              dashboard-metrics-scraper   3                   72448c285b667
	bb35f1a1cbfe6       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   56 seconds ago       Running             kubernetes-dashboard        0                   07593a17b1782
	102e7e31fe20b       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   a5521e8c4785d
	e78d061ffbf60       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   9af798338e9fe
	e456f8380e066       a634548d10b03                                                                                    About a minute ago   Running             kube-proxy                  0                   3e8d13bde5534
	19002a7796106       5d725196c1f47                                                                                    About a minute ago   Running             kube-scheduler              0                   fd08f90e169f7
	0fc0a18250b47       d3377ffb7177c                                                                                    About a minute ago   Running             kube-apiserver              0                   e221f2a8d00a4
	ff2d33804dec1       34cdf99b1bb3b                                                                                    About a minute ago   Running             kube-controller-manager     0                   55361e8c8398c
	11df91bcbbca9       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   b9862ec2f987f
	
	* 
	* ==> coredns [e78d061ffbf6] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220629115611-24356
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220629115611-24356
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=80ef72c6e06144133907f90b1b2924df52b551ed
	                    minikube.k8s.io/name=embed-certs-20220629115611-24356
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_29T12_02_03_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Jun 2022 19:02:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220629115611-24356
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Jun 2022 19:03:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Jun 2022 19:03:22 +0000   Wed, 29 Jun 2022 19:01:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Jun 2022 19:03:22 +0000   Wed, 29 Jun 2022 19:01:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Jun 2022 19:03:22 +0000   Wed, 29 Jun 2022 19:01:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Jun 2022 19:03:22 +0000   Wed, 29 Jun 2022 19:02:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    embed-certs-20220629115611-24356
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbe1e1cef6e940328962dca52b3c5731
	  System UUID:                762c4854-29ab-4ef1-b3c6-183c64d29e4d
	  Boot ID:                    fadc233d-8cf8-4f28-b4a1-fb218440cdcd
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-4bfwq                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     72s
	  kube-system                 etcd-embed-certs-20220629115611-24356                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         87s
	  kube-system                 kube-apiserver-embed-certs-20220629115611-24356             250m (4%)     0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-controller-manager-embed-certs-20220629115611-24356    200m (3%)     0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-proxy-9whjc                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-scheduler-embed-certs-20220629115611-24356             100m (1%)     0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 metrics-server-5c6f97fb75-plpnv                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         71s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-5tqfn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-9qp4w                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 72s   kube-proxy       
	  Normal  Starting                 86s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  86s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  86s   kubelet          Node embed-certs-20220629115611-24356 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    86s   kubelet          Node embed-certs-20220629115611-24356 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     86s   kubelet          Node embed-certs-20220629115611-24356 status is now: NodeHasSufficientPID
	  Normal  NodeReady                86s   kubelet          Node embed-certs-20220629115611-24356 status is now: NodeReady
	  Normal  RegisteredNode           73s   node-controller  Node embed-certs-20220629115611-24356 event: Registered Node embed-certs-20220629115611-24356 in Controller
	  Normal  Starting                 7s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s    kubelet          Node embed-certs-20220629115611-24356 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s    kubelet          Node embed-certs-20220629115611-24356 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s    kubelet          Node embed-certs-20220629115611-24356 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7s    kubelet          Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [11df91bcbbca] <==
	* {"level":"info","ts":"2022-06-29T19:01:57.920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-06-29T19:01:57.920Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-06-29T19:01:57.921Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-29T19:01:57.921Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-29T19:01:57.921Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-29T19:01:57.921Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-06-29T19:01:57.921Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-06-29T19:01:58.174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-29T19:01:58.174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-29T19:01:58.174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-06-29T19:01:58.174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-06-29T19:01:58.174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-06-29T19:01:58.174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-06-29T19:01:58.174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-06-29T19:01:58.174Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:embed-certs-20220629115611-24356 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-29T19:01:58.174Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T19:01:58.174Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T19:01:58.175Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-29T19:01:58.176Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-29T19:01:58.176Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-29T19:01:58.176Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-06-29T19:01:58.184Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:01:58.186Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:01:58.186Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:01:58.186Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  19:03:29 up  1:11,  0 users,  load average: 0.73, 1.22, 1.35
	Linux embed-certs-20220629115611-24356 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [0fc0a18250b4] <==
	* I0629 19:02:02.778315       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0629 19:02:03.354996       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0629 19:02:03.360642       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0629 19:02:03.367973       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0629 19:02:03.446585       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0629 19:02:16.725027       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0629 19:02:16.776119       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0629 19:02:17.498436       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0629 19:02:18.895557       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.111.153.97]
	I0629 19:02:19.509467       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.100.32.89]
	I0629 19:02:19.573604       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.111.33.19]
	W0629 19:02:19.860636       1 handler_proxy.go:102] no RequestInfo found in the context
	E0629 19:02:19.860675       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0629 19:02:19.860688       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0629 19:02:19.860759       1 handler_proxy.go:102] no RequestInfo found in the context
	E0629 19:02:19.860892       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0629 19:02:19.862353       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0629 19:03:21.823512       1 handler_proxy.go:102] no RequestInfo found in the context
	E0629 19:03:21.823549       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0629 19:03:21.823556       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0629 19:03:21.834549       1 handler_proxy.go:102] no RequestInfo found in the context
	E0629 19:03:21.834592       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0629 19:03:21.834599       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [ff2d33804dec] <==
	* I0629 19:02:17.048254       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-689nj"
	I0629 19:02:18.790767       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0629 19:02:18.794513       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0629 19:02:18.798395       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0629 19:02:18.803603       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-plpnv"
	I0629 19:02:19.404662       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0629 19:02:19.408057       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 19:02:19.410218       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	E0629 19:02:19.411974       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 19:02:19.413307       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 19:02:19.416046       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 19:02:19.416422       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0629 19:02:19.418334       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0629 19:02:19.425479       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 19:02:19.425680       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 19:02:19.426776       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 19:02:19.426860       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 19:02:19.428239       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 19:02:19.428284       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 19:02:19.463518       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-5tqfn"
	I0629 19:02:19.463551       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-9qp4w"
	E0629 19:02:46.220184       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0629 19:02:46.634461       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0629 19:03:22.070098       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0629 19:03:22.121105       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [e456f8380e06] <==
	* I0629 19:02:17.409283       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0629 19:02:17.409366       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0629 19:02:17.409453       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0629 19:02:17.489344       1 server_others.go:206] "Using iptables Proxier"
	I0629 19:02:17.489451       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0629 19:02:17.489464       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0629 19:02:17.489480       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0629 19:02:17.489505       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 19:02:17.489620       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 19:02:17.489931       1 server.go:661] "Version info" version="v1.24.2"
	I0629 19:02:17.489973       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 19:02:17.490623       1 config.go:317] "Starting service config controller"
	I0629 19:02:17.490665       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0629 19:02:17.490721       1 config.go:226] "Starting endpoint slice config controller"
	I0629 19:02:17.490786       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0629 19:02:17.491458       1 config.go:444] "Starting node config controller"
	I0629 19:02:17.491467       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0629 19:02:17.590807       1 shared_informer.go:262] Caches are synced for service config
	I0629 19:02:17.590917       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0629 19:02:17.592071       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [19002a779610] <==
	* W0629 19:02:00.688563       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0629 19:02:00.688598       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0629 19:02:00.688683       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0629 19:02:00.688716       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0629 19:02:00.688725       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0629 19:02:00.688735       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0629 19:02:00.688992       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0629 19:02:00.689026       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0629 19:02:00.689180       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0629 19:02:00.689208       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0629 19:02:00.689772       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0629 19:02:00.689890       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0629 19:02:00.689910       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0629 19:02:00.689922       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0629 19:02:00.690025       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0629 19:02:00.690587       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0629 19:02:00.690273       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0629 19:02:00.690825       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0629 19:02:00.690316       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0629 19:02:00.690879       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0629 19:02:01.757107       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0629 19:02:01.757143       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0629 19:02:01.758805       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0629 19:02:01.758869       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0629 19:02:01.986939       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-29 18:57:27 UTC, end at Wed 2022-06-29 19:03:30 UTC. --
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.529683    9859 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a127008e-42de-4155-a698-e83602edb663-xtables-lock\") pod \"kube-proxy-9whjc\" (UID: \"a127008e-42de-4155-a698-e83602edb663\") " pod="kube-system/kube-proxy-9whjc"
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.529700    9859 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96kmx\" (UniqueName: \"kubernetes.io/projected/af632ef8-e7ac-46ee-b7a0-3552276f17e9-kube-api-access-96kmx\") pod \"metrics-server-5c6f97fb75-plpnv\" (UID: \"af632ef8-e7ac-46ee-b7a0-3552276f17e9\") " pod="kube-system/metrics-server-5c6f97fb75-plpnv"
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.529918    9859 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqmd6\" (UniqueName: \"kubernetes.io/projected/9ea6d67d-f471-4bb3-9201-579f2d373e85-kube-api-access-cqmd6\") pod \"coredns-6d4b75cb6d-4bfwq\" (UID: \"9ea6d67d-f471-4bb3-9201-579f2d373e85\") " pod="kube-system/coredns-6d4b75cb6d-4bfwq"
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.529940    9859 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2e8b31a8-de1f-45db-90b7-8d4b00453b5b-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-9qp4w\" (UID: \"2e8b31a8-de1f-45db-90b7-8d4b00453b5b\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-9qp4w"
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.530134    9859 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a127008e-42de-4155-a698-e83602edb663-kube-proxy\") pod \"kube-proxy-9whjc\" (UID: \"a127008e-42de-4155-a698-e83602edb663\") " pod="kube-system/kube-proxy-9whjc"
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.530153    9859 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a127008e-42de-4155-a698-e83602edb663-lib-modules\") pod \"kube-proxy-9whjc\" (UID: \"a127008e-42de-4155-a698-e83602edb663\") " pod="kube-system/kube-proxy-9whjc"
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.530168    9859 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct4c7\" (UniqueName: \"kubernetes.io/projected/a127008e-42de-4155-a698-e83602edb663-kube-api-access-ct4c7\") pod \"kube-proxy-9whjc\" (UID: \"a127008e-42de-4155-a698-e83602edb663\") " pod="kube-system/kube-proxy-9whjc"
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.530184    9859 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/af632ef8-e7ac-46ee-b7a0-3552276f17e9-tmp-dir\") pod \"metrics-server-5c6f97fb75-plpnv\" (UID: \"af632ef8-e7ac-46ee-b7a0-3552276f17e9\") " pod="kube-system/metrics-server-5c6f97fb75-plpnv"
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.530198    9859 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/719c4863-f095-450d-bdbf-445aa7750857-tmp-volume\") pod \"dashboard-metrics-scraper-dffd48c4c-5tqfn\" (UID: \"719c4863-f095-450d-bdbf-445aa7750857\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-5tqfn"
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.530474    9859 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jwww\" (UniqueName: \"kubernetes.io/projected/2e8b31a8-de1f-45db-90b7-8d4b00453b5b-kube-api-access-5jwww\") pod \"kubernetes-dashboard-5fd5574d9f-9qp4w\" (UID: \"2e8b31a8-de1f-45db-90b7-8d4b00453b5b\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-9qp4w"
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.530533    9859 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ea6d67d-f471-4bb3-9201-579f2d373e85-config-volume\") pod \"coredns-6d4b75cb6d-4bfwq\" (UID: \"9ea6d67d-f471-4bb3-9201-579f2d373e85\") " pod="kube-system/coredns-6d4b75cb6d-4bfwq"
	Jun 29 19:03:23 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:23.530547    9859 reconciler.go:157] "Reconciler: start to sync state"
	Jun 29 19:03:24 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:24.694943    9859 request.go:601] Waited for 1.153673276s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jun 29 19:03:24 embed-certs-20220629115611-24356 kubelet[9859]: E0629 19:03:24.699782    9859 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-embed-certs-20220629115611-24356\" already exists" pod="kube-system/kube-scheduler-embed-certs-20220629115611-24356"
	Jun 29 19:03:24 embed-certs-20220629115611-24356 kubelet[9859]: E0629 19:03:24.878783    9859 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-embed-certs-20220629115611-24356\" already exists" pod="kube-system/etcd-embed-certs-20220629115611-24356"
	Jun 29 19:03:25 embed-certs-20220629115611-24356 kubelet[9859]: E0629 19:03:25.084614    9859 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-embed-certs-20220629115611-24356\" already exists" pod="kube-system/kube-controller-manager-embed-certs-20220629115611-24356"
	Jun 29 19:03:25 embed-certs-20220629115611-24356 kubelet[9859]: E0629 19:03:25.355398    9859 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-embed-certs-20220629115611-24356\" already exists" pod="kube-system/kube-apiserver-embed-certs-20220629115611-24356"
	Jun 29 19:03:25 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:25.877911    9859 scope.go:110] "RemoveContainer" containerID="de7b046723782bfee336a6ac80f1646f3a101a4e6dccc317099232c6073a425c"
	Jun 29 19:03:26 embed-certs-20220629115611-24356 kubelet[9859]: E0629 19:03:26.310530    9859 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 29 19:03:26 embed-certs-20220629115611-24356 kubelet[9859]: E0629 19:03:26.310588    9859 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 29 19:03:26 embed-certs-20220629115611-24356 kubelet[9859]: E0629 19:03:26.310706    9859 kuberuntime_manager.go:905] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-96kmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeH
andler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices
:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c6f97fb75-plpnv_kube-system(af632ef8-e7ac-46ee-b7a0-3552276f17e9): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jun 29 19:03:26 embed-certs-20220629115611-24356 kubelet[9859]: E0629 19:03:26.310756    9859 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c6f97fb75-plpnv" podUID=af632ef8-e7ac-46ee-b7a0-3552276f17e9
	Jun 29 19:03:26 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:26.541980    9859 scope.go:110] "RemoveContainer" containerID="de7b046723782bfee336a6ac80f1646f3a101a4e6dccc317099232c6073a425c"
	Jun 29 19:03:26 embed-certs-20220629115611-24356 kubelet[9859]: I0629 19:03:26.542626    9859 scope.go:110] "RemoveContainer" containerID="7fc136d61ce7751c6056a37eff3d56e0a076da300c73616b5d9f4bc7a578dec3"
	Jun 29 19:03:26 embed-certs-20220629115611-24356 kubelet[9859]: E0629 19:03:26.542818    9859 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-5tqfn_kubernetes-dashboard(719c4863-f095-450d-bdbf-445aa7750857)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-5tqfn" podUID=719c4863-f095-450d-bdbf-445aa7750857
	
	* 
	* ==> kubernetes-dashboard [bb35f1a1cbfe] <==
	* 2022/06/29 19:02:33 Using namespace: kubernetes-dashboard
	2022/06/29 19:02:33 Using in-cluster config to connect to apiserver
	2022/06/29 19:02:33 Using secret token for csrf signing
	2022/06/29 19:02:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/29 19:02:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/29 19:02:33 Successful initial request to the apiserver, version: v1.24.2
	2022/06/29 19:02:33 Generating JWE encryption key
	2022/06/29 19:02:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/29 19:02:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/29 19:02:33 Initializing JWE encryption key from synchronized object
	2022/06/29 19:02:33 Creating in-cluster Sidecar client
	2022/06/29 19:02:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/29 19:02:33 Serving insecurely on HTTP port: 9090
	2022/06/29 19:03:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/29 19:02:33 Starting overwatch
	
	* 
	* ==> storage-provisioner [102e7e31fe20] <==
	* I0629 19:02:19.797530       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0629 19:02:19.807242       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0629 19:02:19.807312       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0629 19:02:19.814340       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0629 19:02:19.814480       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20220629115611-24356_7a156cbb-c819-42e7-8200-404bba168a92!
	I0629 19:02:19.814773       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b1f2f454-8c35-4f18-b5aa-3ee51954718a", APIVersion:"v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20220629115611-24356_7a156cbb-c819-42e7-8200-404bba168a92 became leader
	I0629 19:02:19.914865       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20220629115611-24356_7a156cbb-c819-42e7-8200-404bba168a92!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220629115611-24356 -n embed-certs-20220629115611-24356
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220629115611-24356 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-plpnv
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220629115611-24356 describe pod metrics-server-5c6f97fb75-plpnv
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220629115611-24356 describe pod metrics-server-5c6f97fb75-plpnv: exit status 1 (305.91639ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-plpnv" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220629115611-24356 describe pod metrics-server-5c6f97fb75-plpnv: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (43.67s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Pause (43.83s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-different-port-20220629120335-24356 --alsologtostderr -v=1
E0629 12:10:47.046014   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629112950-24356/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220629120335-24356 -n default-k8s-different-port-20220629120335-24356

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220629120335-24356 -n default-k8s-different-port-20220629120335-24356: exit status 2 (16.102443035s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220629120335-24356 -n default-k8s-different-port-20220629120335-24356
E0629 12:11:07.885302   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220629120335-24356 -n default-k8s-different-port-20220629120335-24356: exit status 2 (16.1061999s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-different-port-20220629120335-24356 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-darwin-amd64 unpause -p default-k8s-different-port-20220629120335-24356 --alsologtostderr -v=1: (1.001922421s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220629120335-24356 -n default-k8s-different-port-20220629120335-24356
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220629120335-24356 -n default-k8s-different-port-20220629120335-24356
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220629120335-24356
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220629120335-24356:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1ed0e6ce6fe40ff3f606be0e7c2524dff305d54eefdc9f4120036f1a6d20dc63",
	        "Created": "2022-06-29T19:03:42.606358049Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 292337,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T19:05:25.980383973Z",
	            "FinishedAt": "2022-06-29T19:05:24.073952813Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/1ed0e6ce6fe40ff3f606be0e7c2524dff305d54eefdc9f4120036f1a6d20dc63/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1ed0e6ce6fe40ff3f606be0e7c2524dff305d54eefdc9f4120036f1a6d20dc63/hostname",
	        "HostsPath": "/var/lib/docker/containers/1ed0e6ce6fe40ff3f606be0e7c2524dff305d54eefdc9f4120036f1a6d20dc63/hosts",
	        "LogPath": "/var/lib/docker/containers/1ed0e6ce6fe40ff3f606be0e7c2524dff305d54eefdc9f4120036f1a6d20dc63/1ed0e6ce6fe40ff3f606be0e7c2524dff305d54eefdc9f4120036f1a6d20dc63-json.log",
	        "Name": "/default-k8s-different-port-20220629120335-24356",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220629120335-24356:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220629120335-24356",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3b596c73a48476a8ee5734837ba3392b200f02816d2269c05dd34fc9415920f6-init/diff:/var/lib/docker/overlay2/fffebe0fdfada5807aeb835ff23043496ab70477725ee4f168b630301ac03e45/diff:/var/lib/docker/overlay2/d4eb6d2f34aa8e5c143d900dccdec5da9e3d130567442e6745d4efac5202fe49/diff:/var/lib/docker/overlay2/eb35fadba12ed9c48500d69b77e98e7dd72e90d3de5197d58b370df5b5dca4c7/diff:/var/lib/docker/overlay2/7b63894f671ef1edaa7c3b80a2acbde52dcdb21970e320799b6884e79553ea3e/diff:/var/lib/docker/overlay2/3740b6bc6ff226137eb09a6350d4395dc04bd9012c6c66125dc2ea6b663082cd/diff:/var/lib/docker/overlay2/a2fda66ed4937725e85838baed61cac418abe2ba55b4e664bf944246efcdd371/diff:/var/lib/docker/overlay2/574408913c5c73ee699b85768bbb4c0ce70e697bf6eb623e32017c62e8413acd/diff:/var/lib/docker/overlay2/1cde03c3877bfb18ad0533f814863e3030abec268ff30faceab8815ea7e2daf2/diff:/var/lib/docker/overlay2/52bf889e64b2ea0160f303622d5febb9c52b864e5a6dc2bfa5db90933ccaaa29/diff:/var/lib/docker/overlay2/b131e6
ae4a7a7f5705d087e4001676276e4daa26d6acfc99799bb4992e322410/diff:/var/lib/docker/overlay2/3f5c774f6f46936a974bfc6530b012fda75a59b22450e3342486fe400ab4b531/diff:/var/lib/docker/overlay2/8462528084f0c44a79e421427e0e4bc9ddd7642428c47ff1899d41b265223245/diff:/var/lib/docker/overlay2/cb9765866d13ba37669ec242ea0a1af87c92c7291c716e52037a2ccadc64ac82/diff:/var/lib/docker/overlay2/f0d06e6fa53f3ca9622f1efcfac6fe3fd18d2e5b9e07be3d624b0b9987073e55/diff:/var/lib/docker/overlay2/4ebd12d8b25cff2d3d8a989c047b696088121f0964cc7f94c6d0178ef16e3e1f/diff:/var/lib/docker/overlay2/40e16f5720fd3a8c1c8792aea0ec143af819f19cad845dde40b57ed7e372ab73/diff:/var/lib/docker/overlay2/3ce5ee64ba683c997a13b7ffa65978b4c9652772729737facd794209d49251c3/diff:/var/lib/docker/overlay2/c55c549a78d490ea576942661ba65103ea2992693548217973bb8fa1a5948b74/diff:/var/lib/docker/overlay2/4651b16dbc2e22b8a43dc1154546514f2076168d12f9c108f85fe7c6e60325f0/diff:/var/lib/docker/overlay2/9576343ea03501b15b520a83ffdc675c6d9ecd501f6ffcf6564dd75aa4f2812a/diff:/var/lib/d
ocker/overlay2/635ba7d01f96fd1ec1acabf157f4e5c00cbf80adf65b7f8873e444745fef2c9b/diff:/var/lib/docker/overlay2/6bbe0ce6ca00a7eb5bd7c22def5fcab4ebecab4a0b4cbc5ed236429671a41b6c/diff:/var/lib/docker/overlay2/b335551ba0fcfd6bff6ef5627289041f3083dc338e67b4f4728d4937bb6fb33a/diff:/var/lib/docker/overlay2/58cd90f6ad9016f3c4befb63eac504c9d2f0fc66251c5c9e3348080785d3cec4/diff:/var/lib/docker/overlay2/b7d943a8463e032d405d531846436b89574f10efeea6e4f2df92e3bb0e169d8e/diff:/var/lib/docker/overlay2/e633899f71c18e322af1b75837392bc89fd4275534b5bc70037965b0b80a770d/diff:/var/lib/docker/overlay2/651aabda39b5851bd186e23bc84f1029d819ed8eb032b13ac12f50f3d1486bfb/diff:/var/lib/docker/overlay2/3b137e27694d242a419b3fd2f8605837edfe77dae9462c63c3d7b41538e82591/diff:/var/lib/docker/overlay2/e9d4369b871c47acb146b73f8cbe14b89b0f74027df9117a7dc73f5dee8fee1c/diff:/var/lib/docker/overlay2/9379269362a969b07cc7d7f9faff9fa3b745529df38758733014a5dbe2470775/diff:/var/lib/docker/overlay2/9231c154723fa536d9894f703ec0388448e8611d5a01d54bca3a5b0a0b1
7ffd2/diff:/var/lib/docker/overlay2/9610e37ded5c6da7bd2c8edc56c3ae864637bb354f8ea3d6d1ccee6bd5c2aa7f/diff:/var/lib/docker/overlay2/025ecca5e756b1b8177204df7b2f2567a76dda456b2f1a8e312efd63150a8943/diff:/var/lib/docker/overlay2/7e69089e438e096c36ea0a4a37280fd036841e3287e57635e3407eb58fc0b6da/diff:/var/lib/docker/overlay2/c6d9ef67ed33e64c8ac8c4cdc7c33eb68f5266987969676165cabc2cf2fd346b/diff:/var/lib/docker/overlay2/394627c68237f7993b91eb0c377001630bb2e709dd58f65d899d44a3586dae91/diff:/var/lib/docker/overlay2/0c0c3c94789fc85cd70d9ee2b56d67ce6471d4dced47f21f15152d4edb6bc3e5/diff:/var/lib/docker/overlay2/849809e48c9bcbfe092aa063fcd274f284eeacde89acbb602b439d4cf0aef9b6/diff:/var/lib/docker/overlay2/49c27f0a55f204b161aa2da33ba8004f46cb93bf673975ad1b6286ce659db632/diff:/var/lib/docker/overlay2/a712a8f5cdb2f3840c706296240407405826d2936df034393c1ddf3cf2480b5f/diff:/var/lib/docker/overlay2/47949bfd134ff7a50def5e9b3af3424faf216354d1f157552f3c63c67c2728ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b596c73a48476a8ee5734837ba3392b200f02816d2269c05dd34fc9415920f6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b596c73a48476a8ee5734837ba3392b200f02816d2269c05dd34fc9415920f6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b596c73a48476a8ee5734837ba3392b200f02816d2269c05dd34fc9415920f6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220629120335-24356",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220629120335-24356/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220629120335-24356",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220629120335-24356",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220629120335-24356",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "266ce96f2e686f18200d6d605b579b4dbedf7dd94d5b65d64af1ee9a8b4fe204",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61600"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61601"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61602"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61603"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61604"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/266ce96f2e68",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220629120335-24356": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1ed0e6ce6fe4",
	                        "default-k8s-different-port-20220629120335-24356"
	                    ],
	                    "NetworkID": "0387efa2aeb00cda0190330b61b4511178405a5af8b14254981312d43b80643e",
	                    "EndpointID": "559b6bd4b7de6d7b58462db817b7abecc9850e5812d3aeb14922334ee3b314d9",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220629120335-24356 -n default-k8s-different-port-20220629120335-24356

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-different-port-20220629120335-24356 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p default-k8s-different-port-20220629120335-24356 logs -n 25: (2.999401714s)
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:53 PDT |                     |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |          |         |         |                     |                     |
	|         | --disable-driver-mounts                           |          |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |          |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:55 PDT | 29 Jun 22 11:55 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | sudo crictl images -o json                        |          |         |         |                     |                     |
	| pause   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:55 PDT | 29 Jun 22 11:55 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	| unpause | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:55 PDT | 29 Jun 22 11:55 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:56 PDT | 29 Jun 22 11:56 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:56 PDT | 29 Jun 22 11:56 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:56 PDT | 29 Jun 22 11:56 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT | 29 Jun 22 11:57 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT | 29 Jun 22 11:57 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT | 29 Jun 22 11:57 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT | 29 Jun 22 12:02 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:02 PDT | 29 Jun 22 12:02 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | sudo crictl images -o json                        |          |         |         |                     |                     |
	| pause   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:02 PDT | 29 Jun 22 12:02 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	| unpause | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | disable-driver-mounts-20220629120335-24356        |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:04 PDT |
	|         | default-k8s-different-port-20220629120335-24356   |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |          |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 29 Jun 22 12:05 PDT | 29 Jun 22 12:05 PDT |
	|         | default-k8s-different-port-20220629120335-24356   |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:05 PDT | 29 Jun 22 12:05 PDT |
	|         | default-k8s-different-port-20220629120335-24356   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 29 Jun 22 12:05 PDT | 29 Jun 22 12:05 PDT |
	|         | default-k8s-different-port-20220629120335-24356   |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:05 PDT | 29 Jun 22 12:10 PDT |
	|         | default-k8s-different-port-20220629120335-24356   |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |          |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:10 PDT | 29 Jun 22 12:10 PDT |
	|         | default-k8s-different-port-20220629120335-24356   |          |         |         |                     |                     |
	|         | sudo crictl images -o json                        |          |         |         |                     |                     |
	| pause   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:10 PDT | 29 Jun 22 12:10 PDT |
	|         | default-k8s-different-port-20220629120335-24356   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	| unpause | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:11 PDT | 29 Jun 22 12:11 PDT |
	|         | default-k8s-different-port-20220629120335-24356   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/29 12:05:24
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 12:05:24.742130   40900 out.go:296] Setting OutFile to fd 1 ...
	I0629 12:05:24.742284   40900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 12:05:24.742289   40900 out.go:309] Setting ErrFile to fd 2...
	I0629 12:05:24.742293   40900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 12:05:24.742591   40900 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 12:05:24.742844   40900 out.go:303] Setting JSON to false
	I0629 12:05:24.757723   40900 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":11092,"bootTime":1656518432,"procs":372,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0629 12:05:24.757833   40900 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 12:05:24.779949   40900 out.go:177] * [default-k8s-different-port-20220629120335-24356] minikube v1.26.0 on Darwin 12.4
	I0629 12:05:24.822677   40900 notify.go:193] Checking for updates...
	I0629 12:05:24.843727   40900 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 12:05:24.864447   40900 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 12:05:24.885678   40900 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0629 12:05:24.907000   40900 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 12:05:24.928764   40900 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 12:05:24.950479   40900 config.go:178] Loaded profile config "default-k8s-different-port-20220629120335-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 12:05:24.950992   40900 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 12:05:25.019818   40900 docker.go:137] docker version: linux-20.10.16
	I0629 12:05:25.019950   40900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 12:05:25.141831   40900 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 19:05:25.07732428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 12:05:25.163888   40900 out.go:177] * Using the docker driver based on existing profile
	I0629 12:05:25.185202   40900 start.go:284] selected driver: docker
	I0629 12:05:25.185226   40900 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220629120335-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220629120335-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 12:05:25.185357   40900 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 12:05:25.188563   40900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 12:05:25.310870   40900 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 19:05:25.24659859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 12:05:25.311015   40900 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0629 12:05:25.311029   40900 cni.go:95] Creating CNI manager for ""
	I0629 12:05:25.311037   40900 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 12:05:25.311045   40900 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220629120335-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220629120335-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 12:05:25.354945   40900 out.go:177] * Starting control plane node default-k8s-different-port-20220629120335-24356 in cluster default-k8s-different-port-20220629120335-24356
	I0629 12:05:25.376387   40900 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 12:05:25.397604   40900 out.go:177] * Pulling base image ...
	I0629 12:05:25.439278   40900 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 12:05:25.439289   40900 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 12:05:25.439326   40900 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0629 12:05:25.439338   40900 cache.go:57] Caching tarball of preloaded images
	I0629 12:05:25.439430   40900 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 12:05:25.439443   40900 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0629 12:05:25.440039   40900 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/config.json ...
	I0629 12:05:25.502774   40900 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 12:05:25.502801   40900 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 12:05:25.502814   40900 cache.go:208] Successfully downloaded all kic artifacts
	I0629 12:05:25.502860   40900 start.go:352] acquiring machines lock for default-k8s-different-port-20220629120335-24356: {Name:mk60bb2ebdcfb729d9b918baeac3e57ffdf371c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 12:05:25.502941   40900 start.go:356] acquired machines lock for "default-k8s-different-port-20220629120335-24356" in 63.513µs
	I0629 12:05:25.502981   40900 start.go:94] Skipping create...Using existing machine configuration
	I0629 12:05:25.502990   40900 fix.go:55] fixHost starting: 
	I0629 12:05:25.503259   40900 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220629120335-24356 --format={{.State.Status}}
	I0629 12:05:25.570445   40900 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220629120335-24356: state=Stopped err=<nil>
	W0629 12:05:25.570489   40900 fix.go:129] unexpected machine state, will restart: <nil>
	I0629 12:05:25.612862   40900 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220629120335-24356" ...
	I0629 12:05:25.633949   40900 cli_runner.go:164] Run: docker start default-k8s-different-port-20220629120335-24356
	I0629 12:05:25.987798   40900 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220629120335-24356 --format={{.State.Status}}
	I0629 12:05:26.061121   40900 kic.go:416] container "default-k8s-different-port-20220629120335-24356" state is running.
	I0629 12:05:26.061836   40900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220629120335-24356
	I0629 12:05:26.139968   40900 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/config.json ...
	I0629 12:05:26.140415   40900 machine.go:88] provisioning docker machine ...
	I0629 12:05:26.140442   40900 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220629120335-24356"
	I0629 12:05:26.140525   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:26.214964   40900 main.go:134] libmachine: Using SSH client type: native
	I0629 12:05:26.215172   40900 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 61600 <nil> <nil>}
	I0629 12:05:26.215190   40900 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220629120335-24356 && echo "default-k8s-different-port-20220629120335-24356" | sudo tee /etc/hostname
	I0629 12:05:26.348464   40900 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220629120335-24356
	
	I0629 12:05:26.348558   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:26.425518   40900 main.go:134] libmachine: Using SSH client type: native
	I0629 12:05:26.425668   40900 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 61600 <nil> <nil>}
	I0629 12:05:26.425687   40900 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220629120335-24356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220629120335-24356/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220629120335-24356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 12:05:26.545918   40900 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 12:05:26.545942   40900 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube}
	I0629 12:05:26.545963   40900 ubuntu.go:177] setting up certificates
	I0629 12:05:26.545973   40900 provision.go:83] configureAuth start
	I0629 12:05:26.546049   40900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220629120335-24356
	I0629 12:05:26.619306   40900 provision.go:138] copyHostCerts
	I0629 12:05:26.619394   40900 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem, removing ...
	I0629 12:05:26.619403   40900 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem
	I0629 12:05:26.619490   40900 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem (1082 bytes)
	I0629 12:05:26.619715   40900 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem, removing ...
	I0629 12:05:26.619724   40900 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem
	I0629 12:05:26.619781   40900 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem (1123 bytes)
	I0629 12:05:26.619936   40900 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem, removing ...
	I0629 12:05:26.619942   40900 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem
	I0629 12:05:26.620000   40900 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem (1675 bytes)
	I0629 12:05:26.620120   40900 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220629120335-24356 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220629120335-24356]
	I0629 12:05:26.875537   40900 provision.go:172] copyRemoteCerts
	I0629 12:05:26.875603   40900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 12:05:26.875648   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:26.946535   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:05:27.033514   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0629 12:05:27.051758   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0629 12:05:27.069055   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0629 12:05:27.086527   40900 provision.go:86] duration metric: configureAuth took 540.524483ms
	I0629 12:05:27.086541   40900 ubuntu.go:193] setting minikube options for container-runtime
	I0629 12:05:27.086686   40900 config.go:178] Loaded profile config "default-k8s-different-port-20220629120335-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 12:05:27.086764   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:27.159960   40900 main.go:134] libmachine: Using SSH client type: native
	I0629 12:05:27.160131   40900 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 61600 <nil> <nil>}
	I0629 12:05:27.160142   40900 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0629 12:05:27.278802   40900 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0629 12:05:27.278816   40900 ubuntu.go:71] root file system type: overlay
	I0629 12:05:27.278968   40900 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0629 12:05:27.279043   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:27.349746   40900 main.go:134] libmachine: Using SSH client type: native
	I0629 12:05:27.349897   40900 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 61600 <nil> <nil>}
	I0629 12:05:27.349945   40900 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0629 12:05:27.475893   40900 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0629 12:05:27.475971   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:27.546989   40900 main.go:134] libmachine: Using SSH client type: native
	I0629 12:05:27.547153   40900 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 61600 <nil> <nil>}
	I0629 12:05:27.547166   40900 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0629 12:05:27.669428   40900 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 12:05:27.669447   40900 machine.go:91] provisioned docker machine in 1.528975004s
	I0629 12:05:27.669457   40900 start.go:306] post-start starting for "default-k8s-different-port-20220629120335-24356" (driver="docker")
	I0629 12:05:27.669462   40900 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 12:05:27.669535   40900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 12:05:27.669581   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:27.740351   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:05:27.824385   40900 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 12:05:27.827915   40900 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 12:05:27.827935   40900 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 12:05:27.827942   40900 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 12:05:27.827947   40900 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 12:05:27.827955   40900 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/addons for local assets ...
	I0629 12:05:27.828087   40900 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files for local assets ...
	I0629 12:05:27.828236   40900 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem -> 243562.pem in /etc/ssl/certs
	I0629 12:05:27.828402   40900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 12:05:27.835575   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /etc/ssl/certs/243562.pem (1708 bytes)
	I0629 12:05:27.854776   40900 start.go:309] post-start completed in 185.304144ms
	I0629 12:05:27.854863   40900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 12:05:27.854912   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:27.926994   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:05:28.012302   40900 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 12:05:28.016583   40900 fix.go:57] fixHost completed within 2.513517141s
	I0629 12:05:28.016593   40900 start.go:81] releasing machines lock for "default-k8s-different-port-20220629120335-24356", held for 2.513569784s
	I0629 12:05:28.016680   40900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220629120335-24356
	I0629 12:05:28.088364   40900 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 12:05:28.088365   40900 ssh_runner.go:195] Run: systemctl --version
	I0629 12:05:28.088430   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:28.088437   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:28.164662   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:05:28.166354   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:05:28.248710   40900 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0629 12:05:28.728545   40900 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0629 12:05:28.728612   40900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 12:05:28.740680   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 12:05:28.753053   40900 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0629 12:05:28.822506   40900 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0629 12:05:28.886100   40900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 12:05:28.947818   40900 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0629 12:05:29.176842   40900 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0629 12:05:29.240921   40900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 12:05:29.307948   40900 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0629 12:05:29.317549   40900 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0629 12:05:29.317619   40900 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0629 12:05:29.321834   40900 start.go:468] Will wait 60s for crictl version
	I0629 12:05:29.321886   40900 ssh_runner.go:195] Run: sudo crictl version
	I0629 12:05:29.435634   40900 start.go:477] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0629 12:05:29.435699   40900 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 12:05:29.470251   40900 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 12:05:29.547597   40900 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0629 12:05:29.547772   40900 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220629120335-24356 dig +short host.docker.internal
	I0629 12:05:29.681289   40900 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0629 12:05:29.681400   40900 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0629 12:05:29.685664   40900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 12:05:29.695599   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:29.781285   40900 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 12:05:29.781347   40900 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 12:05:29.812942   40900 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0629 12:05:29.812959   40900 docker.go:533] Images already preloaded, skipping extraction
	I0629 12:05:29.813043   40900 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 12:05:29.844705   40900 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0629 12:05:29.844730   40900 cache_images.go:84] Images are preloaded, skipping loading
	I0629 12:05:29.844805   40900 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0629 12:05:29.916958   40900 cni.go:95] Creating CNI manager for ""
	I0629 12:05:29.916970   40900 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 12:05:29.916983   40900 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0629 12:05:29.916996   40900 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220629120335-24356 NodeName:default-k8s-different-port-20220629120335-24356 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 12:05:29.917102   40900 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-different-port-20220629120335-24356"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0629 12:05:29.917190   40900 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=default-k8s-different-port-20220629120335-24356 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220629120335-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0629 12:05:29.917247   40900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0629 12:05:29.924780   40900 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 12:05:29.924831   40900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0629 12:05:29.932000   40900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (509 bytes)
	I0629 12:05:29.944399   40900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 12:05:29.956598   40900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I0629 12:05:29.968949   40900 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0629 12:05:29.972554   40900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 12:05:29.981744   40900 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356 for IP: 192.168.67.2
	I0629 12:05:29.981862   40900 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key
	I0629 12:05:29.981909   40900 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key
	I0629 12:05:29.981988   40900 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/client.key
	I0629 12:05:29.982046   40900 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/apiserver.key.c7fa3a9e
	I0629 12:05:29.982104   40900 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/proxy-client.key
	I0629 12:05:29.982298   40900 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem (1338 bytes)
	W0629 12:05:29.982336   40900 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356_empty.pem, impossibly tiny 0 bytes
	I0629 12:05:29.982348   40900 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem (1679 bytes)
	I0629 12:05:29.982396   40900 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem (1082 bytes)
	I0629 12:05:29.982427   40900 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem (1123 bytes)
	I0629 12:05:29.982457   40900 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem (1675 bytes)
	I0629 12:05:29.982526   40900 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem (1708 bytes)
	I0629 12:05:29.983077   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0629 12:05:29.999906   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0629 12:05:30.016302   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0629 12:05:30.032829   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0629 12:05:30.049113   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 12:05:30.066680   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0629 12:05:30.085650   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 12:05:30.104770   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0629 12:05:30.122336   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /usr/share/ca-certificates/243562.pem (1708 bytes)
	I0629 12:05:30.139889   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 12:05:30.156772   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem --> /usr/share/ca-certificates/24356.pem (1338 bytes)
	I0629 12:05:30.173073   40900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0629 12:05:30.185217   40900 ssh_runner.go:195] Run: openssl version
	I0629 12:05:30.190479   40900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 12:05:30.198324   40900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 12:05:30.202106   40900 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 17:54 /usr/share/ca-certificates/minikubeCA.pem
	I0629 12:05:30.202144   40900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 12:05:30.207062   40900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 12:05:30.214124   40900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24356.pem && ln -fs /usr/share/ca-certificates/24356.pem /etc/ssl/certs/24356.pem"
	I0629 12:05:30.221651   40900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24356.pem
	I0629 12:05:30.225365   40900 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 17:58 /usr/share/ca-certificates/24356.pem
	I0629 12:05:30.225410   40900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24356.pem
	I0629 12:05:30.230811   40900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24356.pem /etc/ssl/certs/51391683.0"
	I0629 12:05:30.238146   40900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/243562.pem && ln -fs /usr/share/ca-certificates/243562.pem /etc/ssl/certs/243562.pem"
	I0629 12:05:30.245876   40900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/243562.pem
	I0629 12:05:30.249833   40900 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 17:58 /usr/share/ca-certificates/243562.pem
	I0629 12:05:30.249872   40900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/243562.pem
	I0629 12:05:30.261528   40900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/243562.pem /etc/ssl/certs/3ec20f2e.0"
	I0629 12:05:30.271938   40900 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220629120335-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220629120335-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 12:05:30.272050   40900 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 12:05:30.300455   40900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0629 12:05:30.307957   40900 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0629 12:05:30.307974   40900 kubeadm.go:626] restartCluster start
	I0629 12:05:30.308019   40900 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0629 12:05:30.315073   40900 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:30.315136   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:30.387728   40900 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220629120335-24356" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 12:05:30.387917   40900 kubeconfig.go:127] "default-k8s-different-port-20220629120335-24356" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig - will repair!
	I0629 12:05:30.388246   40900 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk20ebad566718388182fa7c9da1cb4ef6bd9ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 12:05:30.389575   40900 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0629 12:05:30.397283   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:30.397330   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:30.405451   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:30.607595   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:30.607799   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:30.618280   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:30.805713   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:30.805781   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:30.814896   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:31.007650   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:31.007853   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:31.018553   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:31.205587   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:31.205734   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:31.216423   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:31.407635   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:31.407906   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:31.418511   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:31.605584   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:31.605644   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:31.615628   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:31.806285   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:31.806423   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:31.817288   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:32.005635   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:32.005834   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:32.016849   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:32.206682   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:32.206849   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:32.218451   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:32.405896   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:32.406007   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:32.416979   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:32.606317   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:32.606498   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:32.616827   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:32.805660   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:32.805734   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:32.815566   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:33.007709   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:33.007876   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:33.019040   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:33.206756   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:33.206924   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:33.218107   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:33.407701   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:33.407880   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:33.418775   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:33.418786   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:33.418833   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:33.426759   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:33.426770   40900 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0629 12:05:33.426779   40900 kubeadm.go:1092] stopping kube-system containers ...
	I0629 12:05:33.426834   40900 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 12:05:33.458274   40900 docker.go:434] Stopping containers: [17ccfd6d87bb f1818c465224 c1adcf1be18e cf519054c3a4 9f0b97ca9575 b425c6e78162 b2c6e14c7587 2a7a4e44fd96 d3440e6bd030 f677cfba52c7 9ba118edb0f3 55aed3b8ba56 2667b1e639dc 70e86622f020 855f6856c31f]
	I0629 12:05:33.458347   40900 ssh_runner.go:195] Run: docker stop 17ccfd6d87bb f1818c465224 c1adcf1be18e cf519054c3a4 9f0b97ca9575 b425c6e78162 b2c6e14c7587 2a7a4e44fd96 d3440e6bd030 f677cfba52c7 9ba118edb0f3 55aed3b8ba56 2667b1e639dc 70e86622f020 855f6856c31f
	I0629 12:05:33.489879   40900 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0629 12:05:33.500322   40900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 12:05:33.507933   40900 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun 29 19:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jun 29 19:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Jun 29 19:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun 29 19:03 /etc/kubernetes/scheduler.conf
	
	I0629 12:05:33.507980   40900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0629 12:05:33.515037   40900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0629 12:05:33.522593   40900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0629 12:05:33.529626   40900 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:33.529674   40900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0629 12:05:33.536295   40900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0629 12:05:33.543526   40900 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:33.543573   40900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0629 12:05:33.550652   40900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 12:05:33.557856   40900 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0629 12:05:33.557869   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:05:33.603386   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:05:34.614038   40900 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.010601206s)
	I0629 12:05:34.614052   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:05:34.784553   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:05:34.833543   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:05:34.911771   40900 api_server.go:51] waiting for apiserver process to appear ...
	I0629 12:05:34.911850   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:05:35.421616   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:05:35.921384   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:05:35.935811   40900 api_server.go:71] duration metric: took 1.024009063s to wait for apiserver process to appear ...
	I0629 12:05:35.935830   40900 api_server.go:87] waiting for apiserver healthz status ...
	I0629 12:05:35.935849   40900 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61604/healthz ...
	I0629 12:05:35.937118   40900 api_server.go:256] stopped: https://127.0.0.1:61604/healthz: Get "https://127.0.0.1:61604/healthz": EOF
	I0629 12:05:36.438094   40900 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61604/healthz ...
	I0629 12:05:39.455472   40900 api_server.go:266] https://127.0.0.1:61604/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0629 12:05:39.455492   40900 api_server.go:102] status: https://127.0.0.1:61604/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0629 12:05:39.937469   40900 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61604/healthz ...
	I0629 12:05:39.943847   40900 api_server.go:266] https://127.0.0.1:61604/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 12:05:39.943858   40900 api_server.go:102] status: https://127.0.0.1:61604/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 12:05:40.437422   40900 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61604/healthz ...
	I0629 12:05:40.444593   40900 api_server.go:266] https://127.0.0.1:61604/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 12:05:40.444607   40900 api_server.go:102] status: https://127.0.0.1:61604/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 12:05:40.937423   40900 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61604/healthz ...
	I0629 12:05:40.942951   40900 api_server.go:266] https://127.0.0.1:61604/healthz returned 200:
	ok
	I0629 12:05:40.949694   40900 api_server.go:140] control plane version: v1.24.2
	I0629 12:05:40.949709   40900 api_server.go:130] duration metric: took 5.0137233s to wait for apiserver health ...
	I0629 12:05:40.949717   40900 cni.go:95] Creating CNI manager for ""
	I0629 12:05:40.949721   40900 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 12:05:40.949730   40900 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 12:05:40.956768   40900 system_pods.go:59] 8 kube-system pods found
	I0629 12:05:40.956784   40900 system_pods.go:61] "coredns-6d4b75cb6d-sr5rq" [6859dc98-d098-4a2f-b3e6-6e5b6225e930] Running
	I0629 12:05:40.956790   40900 system_pods.go:61] "etcd-default-k8s-different-port-20220629120335-24356" [4af024aa-48ac-40b0-b4c8-d05ab73ec465] Running
	I0629 12:05:40.956794   40900 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220629120335-24356" [bd9308ff-a917-4e0e-9d5c-8192ea128b2f] Running
	I0629 12:05:40.956807   40900 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220629120335-24356" [5d116566-36ba-4925-973b-c8622702e1e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0629 12:05:40.956811   40900 system_pods.go:61] "kube-proxy-c4lzs" [9bc1f0bb-d9c3-4809-a4b2-0f750021bad3] Running
	I0629 12:05:40.956834   40900 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220629120335-24356" [22bd5cf2-dd2c-4cb9-ad4b-8ea4c8d5772f] Running
	I0629 12:05:40.956839   40900 system_pods.go:61] "metrics-server-5c6f97fb75-rfjxz" [a1dcb333-c180-4b6b-8f3f-025a41f001b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 12:05:40.956843   40900 system_pods.go:61] "storage-provisioner" [5f591cc6-9b0f-4275-89e2-3096f390587d] Running
	I0629 12:05:40.956847   40900 system_pods.go:74] duration metric: took 7.112659ms to wait for pod list to return data ...
	I0629 12:05:40.956853   40900 node_conditions.go:102] verifying NodePressure condition ...
	I0629 12:05:40.959478   40900 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0629 12:05:40.959495   40900 node_conditions.go:123] node cpu capacity is 6
	I0629 12:05:40.959503   40900 node_conditions.go:105] duration metric: took 2.644447ms to run NodePressure ...
	I0629 12:05:40.959514   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:05:41.214716   40900 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0629 12:05:41.219273   40900 kubeadm.go:777] kubelet initialised
	I0629 12:05:41.219284   40900 kubeadm.go:778] duration metric: took 4.549914ms waiting for restarted kubelet to initialise ...
	I0629 12:05:41.219292   40900 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 12:05:41.225780   40900 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-sr5rq" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:41.231094   40900 pod_ready.go:92] pod "coredns-6d4b75cb6d-sr5rq" in "kube-system" namespace has status "Ready":"True"
	I0629 12:05:41.231106   40900 pod_ready.go:81] duration metric: took 5.312518ms waiting for pod "coredns-6d4b75cb6d-sr5rq" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:41.231116   40900 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:41.238011   40900 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:05:41.238021   40900 pod_ready.go:81] duration metric: took 6.900167ms waiting for pod "etcd-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:41.238028   40900 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:41.243816   40900 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:05:41.243825   40900 pod_ready.go:81] duration metric: took 5.792024ms waiting for pod "kube-apiserver-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:41.243832   40900 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:43.362002   40900 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 12:05:45.858402   40900 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 12:05:47.859472   40900 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 12:05:49.862061   40900 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 12:05:51.859532   40900 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:05:51.859545   40900 pod_ready.go:81] duration metric: took 10.615389832s waiting for pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:51.859553   40900 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c4lzs" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:51.864514   40900 pod_ready.go:92] pod "kube-proxy-c4lzs" in "kube-system" namespace has status "Ready":"True"
	I0629 12:05:51.864523   40900 pod_ready.go:81] duration metric: took 4.966121ms waiting for pod "kube-proxy-c4lzs" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:51.864529   40900 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:51.870041   40900 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:05:51.870052   40900 pod_ready.go:81] duration metric: took 5.516262ms waiting for pod "kube-scheduler-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:51.870058   40900 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:53.883004   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:05:55.884160   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:05:58.383036   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:00.384561   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:02.882051   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:04.884520   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:07.383533   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:09.882797   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:11.882979   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:13.883312   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:15.883735   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:18.385501   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:20.883564   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:22.886763   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:25.383709   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:27.386276   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:29.885692   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:32.384164   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:34.883309   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:36.884800   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:39.384855   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:41.884577   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:44.384450   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:46.885968   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:49.384678   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:51.386004   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:53.886429   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:56.384509   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:06:58.386257   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:00.386604   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:02.885075   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:05.385265   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:07.386268   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:09.886384   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:12.385466   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:14.887248   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:17.385034   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:19.385266   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:21.886143   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:23.886397   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:25.887289   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:28.387746   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:30.890336   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:33.385686   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:35.387141   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:37.387612   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:39.885855   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:42.386043   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:44.387585   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:46.890258   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:49.388039   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:51.884165   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:53.885975   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:55.887335   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:07:57.888082   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:00.387867   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:02.885883   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:04.887741   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:07.386962   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:09.887038   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:11.888284   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:14.386729   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:16.388752   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:18.889167   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:21.388569   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:23.389000   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:25.889318   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:28.387156   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:30.887591   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:32.888038   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:35.387189   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:37.388954   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:39.888736   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:42.387770   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:44.388231   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:46.388865   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:48.390054   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:50.887077   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:52.889796   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:55.387603   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:57.389156   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:08:59.390067   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:01.888280   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:03.890615   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:06.388810   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:08.395053   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:10.891022   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:13.387671   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:15.389123   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:17.389657   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:19.891053   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:22.390598   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:24.888414   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:26.889444   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:28.890985   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:31.389168   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:33.391212   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:35.888935   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:37.889955   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:40.387878   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:42.391117   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:44.887624   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:46.888329   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:48.892489   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:51.390289   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:51.884754   40900 pod_ready.go:81] duration metric: took 4m0.007433392s waiting for pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace to be "Ready" ...
	E0629 12:09:51.884779   40900 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0629 12:09:51.884801   40900 pod_ready.go:38] duration metric: took 4m10.657980757s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 12:09:51.884847   40900 kubeadm.go:630] restartCluster took 4m21.569015743s
	W0629 12:09:51.884974   40900 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0629 12:09:51.885001   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0629 12:09:54.340631   40900 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.455542748s)
	I0629 12:09:54.340693   40900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 12:09:54.350928   40900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 12:09:54.358196   40900 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 12:09:54.358240   40900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 12:09:54.365645   40900 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 12:09:54.365669   40900 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 12:09:54.644180   40900 out.go:204]   - Generating certificates and keys ...
	I0629 12:09:55.436699   40900 out.go:204]   - Booting up control plane ...
	I0629 12:10:02.007426   40900 out.go:204]   - Configuring RBAC rules ...
	I0629 12:10:02.381881   40900 cni.go:95] Creating CNI manager for ""
	I0629 12:10:02.381896   40900 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 12:10:02.381926   40900 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0629 12:10:02.382004   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:02.382007   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=80ef72c6e06144133907f90b1b2924df52b551ed minikube.k8s.io/name=default-k8s-different-port-20220629120335-24356 minikube.k8s.io/updated_at=2022_06_29T12_10_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:02.398555   40900 ops.go:34] apiserver oom_adj: -16
	I0629 12:10:02.524549   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:03.081788   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:03.580947   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:04.082906   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:04.581016   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:05.080952   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:05.582778   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:06.082461   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:06.581135   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:07.081462   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:07.580952   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:08.083116   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:08.582944   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:09.081159   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:09.583028   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:10.081502   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:10.583083   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:11.082047   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:11.581902   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:12.080935   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:12.581027   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:13.081091   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:13.581484   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:14.081976   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:14.581567   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:15.081419   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:15.581169   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:16.081215   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:16.581098   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:16.636385   40900 kubeadm.go:1045] duration metric: took 14.25401703s to wait for elevateKubeSystemPrivileges.
	I0629 12:10:16.636403   40900 kubeadm.go:397] StartCluster complete in 4m46.355879997s
	I0629 12:10:16.636421   40900 settings.go:142] acquiring lock: {Name:mk8cd784535a926dd1b6955ad1b3a357865d16d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 12:10:16.636502   40900 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 12:10:16.637057   40900 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk20ebad566718388182fa7c9da1cb4ef6bd9ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 12:10:17.154534   40900 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220629120335-24356" rescaled to 1
	I0629 12:10:17.154581   40900 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 12:10:17.154592   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0629 12:10:17.154635   40900 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0629 12:10:17.179168   40900 out.go:177] * Verifying Kubernetes components...
	I0629 12:10:17.154816   40900 config.go:178] Loaded profile config "default-k8s-different-port-20220629120335-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 12:10:17.179227   40900 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220629120335-24356"
	I0629 12:10:17.179238   40900 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220629120335-24356"
	I0629 12:10:17.179242   40900 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220629120335-24356"
	I0629 12:10:17.179244   40900 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220629120335-24356"
	I0629 12:10:17.251996   40900 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220629120335-24356"
	I0629 12:10:17.252003   40900 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220629120335-24356"
	W0629 12:10:17.252026   40900 addons.go:162] addon storage-provisioner should already be in state true
	I0629 12:10:17.252026   40900 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220629120335-24356"
	I0629 12:10:17.252032   40900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W0629 12:10:17.252012   40900 addons.go:162] addon metrics-server should already be in state true
	I0629 12:10:17.252011   40900 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220629120335-24356"
	W0629 12:10:17.252073   40900 addons.go:162] addon dashboard should already be in state true
	I0629 12:10:17.252075   40900 host.go:66] Checking if "default-k8s-different-port-20220629120335-24356" exists ...
	I0629 12:10:17.252094   40900 host.go:66] Checking if "default-k8s-different-port-20220629120335-24356" exists ...
	I0629 12:10:17.252113   40900 host.go:66] Checking if "default-k8s-different-port-20220629120335-24356" exists ...
	I0629 12:10:17.252342   40900 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220629120335-24356 --format={{.State.Status}}
	I0629 12:10:17.252474   40900 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220629120335-24356 --format={{.State.Status}}
	I0629 12:10:17.253292   40900 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220629120335-24356 --format={{.State.Status}}
	I0629 12:10:17.253467   40900 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220629120335-24356 --format={{.State.Status}}
	I0629 12:10:17.264182   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0629 12:10:17.276210   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:10:17.405718   40900 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 12:10:17.415915   40900 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220629120335-24356"
	I0629 12:10:17.433419   40900 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220629120335-24356" to be "Ready" ...
	I0629 12:10:17.443055   40900 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 12:10:17.464058   40900 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	W0629 12:10:17.484802   40900 addons.go:162] addon default-storageclass should already be in state true
	I0629 12:10:17.506219   40900 host.go:66] Checking if "default-k8s-different-port-20220629120335-24356" exists ...
	I0629 12:10:17.484810   40900 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0629 12:10:17.484823   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0629 12:10:17.506850   40900 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220629120335-24356 --format={{.State.Status}}
	I0629 12:10:17.520608   40900 node_ready.go:49] node "default-k8s-different-port-20220629120335-24356" has status "Ready":"True"
	I0629 12:10:17.527049   40900 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0629 12:10:17.527075   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:10:17.563798   40900 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0629 12:10:17.563870   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0629 12:10:17.563872   40900 node_ready.go:38] duration metric: took 79.048397ms waiting for node "default-k8s-different-port-20220629120335-24356" to be "Ready" ...
	I0629 12:10:17.585134   40900 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 12:10:17.585184   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0629 12:10:17.585206   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0629 12:10:17.585291   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:10:17.585296   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:10:17.593199   40900 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-54rws" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:17.665696   40900 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0629 12:10:17.665711   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0629 12:10:17.665787   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:10:17.670001   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:10:17.696870   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:10:17.700767   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:10:17.759343   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:10:17.835815   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0629 12:10:17.835838   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0629 12:10:17.837925   40900 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0629 12:10:17.837935   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0629 12:10:17.850995   40900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 12:10:17.922795   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0629 12:10:17.922813   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0629 12:10:17.933868   40900 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0629 12:10:17.933891   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0629 12:10:17.949816   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0629 12:10:17.949837   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0629 12:10:18.025643   40900 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0629 12:10:18.025663   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0629 12:10:18.040174   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0629 12:10:18.040187   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0629 12:10:18.053606   40900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0629 12:10:18.116478   40900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0629 12:10:18.137318   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0629 12:10:18.137344   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0629 12:10:18.240726   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0629 12:10:18.240742   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0629 12:10:18.319690   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0629 12:10:18.319710   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0629 12:10:18.344325   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0629 12:10:18.344337   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0629 12:10:18.358646   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0629 12:10:18.358658   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0629 12:10:18.373107   40900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0629 12:10:18.639163   40900 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.374898461s)
	I0629 12:10:18.639189   40900 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0629 12:10:18.848043   40900 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220629120335-24356"
	I0629 12:10:19.169072   40900 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0629 12:10:19.227078   40900 addons.go:414] enableAddons completed in 2.072399823s
	I0629 12:10:19.630154   40900 pod_ready.go:102] pod "coredns-6d4b75cb6d-54rws" in "kube-system" namespace has status "Ready":"False"
	I0629 12:10:22.129155   40900 pod_ready.go:102] pod "coredns-6d4b75cb6d-54rws" in "kube-system" namespace has status "Ready":"False"
	I0629 12:10:22.628751   40900 pod_ready.go:92] pod "coredns-6d4b75cb6d-54rws" in "kube-system" namespace has status "Ready":"True"
	I0629 12:10:22.628765   40900 pod_ready.go:81] duration metric: took 5.035392246s waiting for pod "coredns-6d4b75cb6d-54rws" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.628773   40900 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-vf8rl" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.633109   40900 pod_ready.go:92] pod "coredns-6d4b75cb6d-vf8rl" in "kube-system" namespace has status "Ready":"True"
	I0629 12:10:22.633116   40900 pod_ready.go:81] duration metric: took 4.337728ms waiting for pod "coredns-6d4b75cb6d-vf8rl" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.633122   40900 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.637139   40900 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:10:22.637148   40900 pod_ready.go:81] duration metric: took 4.019768ms waiting for pod "etcd-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.637154   40900 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.641938   40900 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:10:22.641946   40900 pod_ready.go:81] duration metric: took 4.786805ms waiting for pod "kube-apiserver-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.641954   40900 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.646093   40900 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:10:22.646102   40900 pod_ready.go:81] duration metric: took 4.142515ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.646108   40900 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-42mtt" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:23.025736   40900 pod_ready.go:92] pod "kube-proxy-42mtt" in "kube-system" namespace has status "Ready":"True"
	I0629 12:10:23.025745   40900 pod_ready.go:81] duration metric: took 379.621193ms waiting for pod "kube-proxy-42mtt" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:23.025752   40900 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:23.425527   40900 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:10:23.425537   40900 pod_ready.go:81] duration metric: took 399.769149ms waiting for pod "kube-scheduler-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:23.425543   40900 pod_ready.go:38] duration metric: took 5.840170789s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 12:10:23.425556   40900 api_server.go:51] waiting for apiserver process to appear ...
	I0629 12:10:23.425608   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:10:23.439147   40900 api_server.go:71] duration metric: took 6.284351507s to wait for apiserver process to appear ...
	I0629 12:10:23.439159   40900 api_server.go:87] waiting for apiserver healthz status ...
	I0629 12:10:23.439165   40900 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61604/healthz ...
	I0629 12:10:23.445058   40900 api_server.go:266] https://127.0.0.1:61604/healthz returned 200:
	ok
	I0629 12:10:23.446503   40900 api_server.go:140] control plane version: v1.24.2
	I0629 12:10:23.446513   40900 api_server.go:130] duration metric: took 7.350129ms to wait for apiserver health ...
	I0629 12:10:23.446519   40900 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 12:10:23.632422   40900 system_pods.go:59] 9 kube-system pods found
	I0629 12:10:23.632439   40900 system_pods.go:61] "coredns-6d4b75cb6d-54rws" [60c259ab-57b4-463a-b089-fccaa6d3f6c0] Running
	I0629 12:10:23.632443   40900 system_pods.go:61] "coredns-6d4b75cb6d-vf8rl" [238d3a6f-05f7-4855-85b5-0d07b08f9074] Running
	I0629 12:10:23.632462   40900 system_pods.go:61] "etcd-default-k8s-different-port-20220629120335-24356" [2ed40fc5-8a2c-4005-88a8-162bf7f5db1f] Running
	I0629 12:10:23.632466   40900 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220629120335-24356" [9b870f1e-f6ca-4bef-91f3-9d2de9de0aec] Running
	I0629 12:10:23.632490   40900 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220629120335-24356" [8cf4752e-ce9b-4b30-8d53-5f06bac5f6a1] Running
	I0629 12:10:23.632493   40900 system_pods.go:61] "kube-proxy-42mtt" [322de8c5-d47e-4bb0-9d7d-ef640626c70c] Running
	I0629 12:10:23.632500   40900 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220629120335-24356" [c257d0fd-43d0-40eb-b9d1-0f1d4747a0ae] Running
	I0629 12:10:23.632505   40900 system_pods.go:61] "metrics-server-5c6f97fb75-smdz9" [2661f4fb-d410-4b0b-9abe-0c030e00d8b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 12:10:23.632511   40900 system_pods.go:61] "storage-provisioner" [bc59072d-a402-4441-ace1-1ade0e3b7e2f] Running
	I0629 12:10:23.632516   40900 system_pods.go:74] duration metric: took 185.971139ms to wait for pod list to return data ...
	I0629 12:10:23.632520   40900 default_sa.go:34] waiting for default service account to be created ...
	I0629 12:10:23.825634   40900 default_sa.go:45] found service account: "default"
	I0629 12:10:23.825650   40900 default_sa.go:55] duration metric: took 193.118786ms for default service account to be created ...
	I0629 12:10:23.825658   40900 system_pods.go:116] waiting for k8s-apps to be running ...
	I0629 12:10:24.028758   40900 system_pods.go:86] 9 kube-system pods found
	I0629 12:10:24.028773   40900 system_pods.go:89] "coredns-6d4b75cb6d-54rws" [60c259ab-57b4-463a-b089-fccaa6d3f6c0] Running
	I0629 12:10:24.028778   40900 system_pods.go:89] "coredns-6d4b75cb6d-vf8rl" [238d3a6f-05f7-4855-85b5-0d07b08f9074] Running
	I0629 12:10:24.028781   40900 system_pods.go:89] "etcd-default-k8s-different-port-20220629120335-24356" [2ed40fc5-8a2c-4005-88a8-162bf7f5db1f] Running
	I0629 12:10:24.028785   40900 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220629120335-24356" [9b870f1e-f6ca-4bef-91f3-9d2de9de0aec] Running
	I0629 12:10:24.028789   40900 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220629120335-24356" [8cf4752e-ce9b-4b30-8d53-5f06bac5f6a1] Running
	I0629 12:10:24.028792   40900 system_pods.go:89] "kube-proxy-42mtt" [322de8c5-d47e-4bb0-9d7d-ef640626c70c] Running
	I0629 12:10:24.028795   40900 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220629120335-24356" [c257d0fd-43d0-40eb-b9d1-0f1d4747a0ae] Running
	I0629 12:10:24.028803   40900 system_pods.go:89] "metrics-server-5c6f97fb75-smdz9" [2661f4fb-d410-4b0b-9abe-0c030e00d8b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 12:10:24.028807   40900 system_pods.go:89] "storage-provisioner" [bc59072d-a402-4441-ace1-1ade0e3b7e2f] Running
	I0629 12:10:24.028813   40900 system_pods.go:126] duration metric: took 203.144154ms to wait for k8s-apps to be running ...
	I0629 12:10:24.028818   40900 system_svc.go:44] waiting for kubelet service to be running ....
	I0629 12:10:24.028868   40900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 12:10:24.039499   40900 system_svc.go:56] duration metric: took 10.670289ms WaitForService to wait for kubelet.
	I0629 12:10:24.039512   40900 kubeadm.go:572] duration metric: took 6.88470241s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0629 12:10:24.039525   40900 node_conditions.go:102] verifying NodePressure condition ...
	I0629 12:10:24.226241   40900 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0629 12:10:24.226255   40900 node_conditions.go:123] node cpu capacity is 6
	I0629 12:10:24.226262   40900 node_conditions.go:105] duration metric: took 186.72858ms to run NodePressure ...
	I0629 12:10:24.226270   40900 start.go:213] waiting for startup goroutines ...
	I0629 12:10:24.261002   40900 start.go:506] kubectl: 1.24.0, cluster: 1.24.2 (minor skew: 0)
	I0629 12:10:24.304930   40900 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220629120335-24356" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-29 19:05:26 UTC, end at Wed 2022-06-29 19:11:22 UTC. --
	Jun 29 19:09:53 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:09:53.094712682Z" level=info msg="ignoring event" container=8a03830f39bc1f99a5fd84bf135868a57d14e6941a9f8288df718aa09ca6ef2a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:09:53 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:09:53.158503939Z" level=info msg="ignoring event" container=5ff01dcef388380b112455ab1946805914406b1507f87e1fd3b47bbf12576c24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:09:53 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:09:53.231819270Z" level=info msg="ignoring event" container=a12283742e03db61e6dfa5e50c11e9dd633d2b93082dd91d50706be5a1455ed4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:09:53 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:09:53.373579186Z" level=info msg="ignoring event" container=a0ebda6bcd08c5e038c9888dfea2b96fb5fe699cadd715349cd643d40974d598 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:09:53 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:09:53.485113410Z" level=info msg="ignoring event" container=ca88a12972cb48457f99584e6dd1688b4a7d6fbbe6373263e43ed94b89aec5aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:09:53 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:09:53.555928590Z" level=info msg="ignoring event" container=8c44e501d4657fde0b5b07d750e11592435e5e08bb11cb1fc171b8665b53d115 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:09:53 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:09:53.624285983Z" level=info msg="ignoring event" container=b6a0faf878ae5dac02d056da146655b97663a9d77848397a1f8a713ea3b4f351 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:09:53 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:09:53.688587145Z" level=info msg="ignoring event" container=6cb9b52c9ae704e3f7cc50313d5fde0ad1d716f50c1257d8d68523d9b621d92c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:09:53 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:09:53.753957666Z" level=info msg="ignoring event" container=8a1a18181a86bac7fc8a5b80f1fdc0659bb67a6e98ed6367581c9f4e5bfe5a1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:09:53 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:09:53.819824354Z" level=info msg="ignoring event" container=548762a86045ed693871a7903fd3676a4f97db89fbede0efee88e4cf0d6c5787 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:09:53 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:09:53.944103273Z" level=info msg="ignoring event" container=c43e76c5abd3a30015a00f927bf18e6976ae57847559aa6dd1da9a0f25cf1be5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:10:19 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:19.495990954Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:10:19 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:19.496038908Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:10:19 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:19.497289506Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:10:21 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:21.525312419Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 29 19:10:22 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:22.217449217Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 29 19:10:24 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:24.089142446Z" level=info msg="ignoring event" container=72cad539fa623cabf025e3f29d013ae8018b7841aa71058e6355fc508e0e0d8a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:10:24 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:24.278945740Z" level=info msg="ignoring event" container=7eabf4f18f63f348f05902172df676b6ea282816a0cf3ad861752180254584f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:10:26 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:26.096255422Z" level=info msg="ignoring event" container=03d0fb39e0996e71d265cc21913b948a8c98cdedb646bd1ba2ca87f34498cca4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:10:26 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:26.165740504Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jun 29 19:10:26 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:26.362246151Z" level=info msg="ignoring event" container=090f498474a26c17cbf99583a4dd7ce6125ee0fc1539983ad3475c3c08085b05 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:10:32 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:32.455766817Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:10:32 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:32.455789691Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:10:32 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:32.457139186Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:10:40 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:40.553733520Z" level=info msg="ignoring event" container=04e7386bad2372abddbca585ae7218086dd2f9460b7e3264509d1d6845fd2962 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	04e7386bad237       a90209bb39e3d                                                                                    42 seconds ago       Exited              dashboard-metrics-scraper   2                   6763c19f43ef7
	70de6e61337ed       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   51 seconds ago       Running             kubernetes-dashboard        0                   7b84e4c959d32
	9902a6f6a073a       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   596988a1fea3c
	2450635d2a98d       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   cf62692852b50
	e326c378c206a       a634548d10b03                                                                                    About a minute ago   Running             kube-proxy                  0                   b7d59273fe68f
	735198f9d479d       34cdf99b1bb3b                                                                                    About a minute ago   Running             kube-controller-manager     0                   e6f67c8b51b50
	eda9cd41cb249       d3377ffb7177c                                                                                    About a minute ago   Running             kube-apiserver              0                   0b59256d3a69f
	e9f55dbf4dfdd       5d725196c1f47                                                                                    About a minute ago   Running             kube-scheduler              0                   6ef79d3852256
	e9bc7a6b60cbb       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   78b948b5d031f
	
	* 
	* ==> coredns [2450635d2a98] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220629120335-24356
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220629120335-24356
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=80ef72c6e06144133907f90b1b2924df52b551ed
	                    minikube.k8s.io/name=default-k8s-different-port-20220629120335-24356
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_29T12_10_02_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Jun 2022 19:09:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220629120335-24356
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Jun 2022 19:11:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Jun 2022 19:11:20 +0000   Wed, 29 Jun 2022 19:09:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Jun 2022 19:11:20 +0000   Wed, 29 Jun 2022 19:09:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Jun 2022 19:11:20 +0000   Wed, 29 Jun 2022 19:09:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Jun 2022 19:11:20 +0000   Wed, 29 Jun 2022 19:10:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    default-k8s-different-port-20220629120335-24356
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbe1e1cef6e940328962dca52b3c5731
	  System UUID:                bc856e45-c15a-405f-9901-feecde9d5756
	  Boot ID:                    fadc233d-8cf8-4f28-b4a1-fb218440cdcd
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-54rws                                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     67s
	  kube-system                 etcd-default-k8s-different-port-20220629120335-24356                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         82s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220629120335-24356             250m (4%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220629120335-24356    200m (3%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-42mtt                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220629120335-24356             100m (1%)     0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 metrics-server-5c6f97fb75-smdz9                                            100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         65s
	  kube-system                 storage-provisioner                                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-tcmv4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-q9lqr                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 66s   kube-proxy       
	  Normal  Starting                 81s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  81s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  81s   kubelet          Node default-k8s-different-port-20220629120335-24356 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    81s   kubelet          Node default-k8s-different-port-20220629120335-24356 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     81s   kubelet          Node default-k8s-different-port-20220629120335-24356 status is now: NodeHasSufficientPID
	  Normal  NodeReady                71s   kubelet          Node default-k8s-different-port-20220629120335-24356 status is now: NodeReady
	  Normal  RegisteredNode           68s   node-controller  Node default-k8s-different-port-20220629120335-24356 event: Registered Node default-k8s-different-port-20220629120335-24356 in Controller
	  Normal  Starting                 3s    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s    kubelet          Node default-k8s-different-port-20220629120335-24356 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet          Node default-k8s-different-port-20220629120335-24356 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet          Node default-k8s-different-port-20220629120335-24356 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [e9bc7a6b60cb] <==
	* {"level":"info","ts":"2022-06-29T19:09:56.848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-06-29T19:09:56.848Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-06-29T19:09:56.849Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-29T19:09:56.849Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-06-29T19:09:56.849Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-06-29T19:09:56.849Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-29T19:09:56.849Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-29T19:09:57.644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-29T19:09:57.644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-29T19:09:57.644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-06-29T19:09:57.644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-06-29T19:09:57.644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-06-29T19:09:57.644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-06-29T19:09:57.644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-06-29T19:09:57.645Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:default-k8s-different-port-20220629120335-24356 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-29T19:09:57.645Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T19:09:57.645Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T19:09:57.645Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:09:57.645Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:09:57.645Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:09:57.645Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:09:57.646Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-06-29T19:09:57.646Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-29T19:09:57.651Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-29T19:09:57.651Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  19:11:23 up  1:19,  0 users,  load average: 1.56, 1.06, 1.20
	Linux default-k8s-different-port-20220629120335-24356 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [eda9cd41cb24] <==
	* I0629 19:10:02.198859       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0629 19:10:02.205721       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0629 19:10:02.213348       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0629 19:10:02.280558       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0629 19:10:15.994943       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0629 19:10:16.643414       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0629 19:10:17.291683       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0629 19:10:18.857767       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.109.122.127]
	E0629 19:10:18.936113       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0629 19:10:19.135852       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.111.8.165]
	I0629 19:10:19.152536       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.103.141.115]
	W0629 19:10:19.751832       1 handler_proxy.go:102] no RequestInfo found in the context
	W0629 19:10:19.751849       1 handler_proxy.go:102] no RequestInfo found in the context
	E0629 19:10:19.751885       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0629 19:10:19.751906       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0629 19:10:19.751930       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0629 19:10:19.753234       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0629 19:11:19.997970       1 handler_proxy.go:102] no RequestInfo found in the context
	E0629 19:11:19.998007       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0629 19:11:19.998013       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0629 19:11:19.998544       1 handler_proxy.go:102] no RequestInfo found in the context
	E0629 19:11:19.998561       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0629 19:11:19.999017       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [735198f9d479] <==
	* I0629 19:10:18.741383       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0629 19:10:18.743987       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0629 19:10:18.749927       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0629 19:10:18.762280       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-smdz9"
	I0629 19:10:18.852586       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0629 19:10:18.858390       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 19:10:18.864027       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	E0629 19:10:18.864140       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0629 19:10:18.869008       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 19:10:18.869461       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 19:10:18.869507       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 19:10:18.872913       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 19:10:18.933310       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 19:10:18.933641       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0629 19:10:18.935238       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 19:10:18.935427       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 19:10:18.940421       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 19:10:18.940472       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 19:10:19.047900       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-tcmv4"
	I0629 19:10:19.048256       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-q9lqr"
	W0629 19:10:25.247177       1 endpointslice_controller.go:302] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: EndpointSlice informer cache is out of date
	E0629 19:10:45.942912       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0629 19:10:46.353460       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0629 19:11:20.216646       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0629 19:11:20.223631       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [e326c378c206] <==
	* I0629 19:10:17.192603       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0629 19:10:17.192648       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0629 19:10:17.192667       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0629 19:10:17.287754       1 server_others.go:206] "Using iptables Proxier"
	I0629 19:10:17.287861       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0629 19:10:17.287877       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0629 19:10:17.287894       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0629 19:10:17.287924       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 19:10:17.288210       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 19:10:17.288434       1 server.go:661] "Version info" version="v1.24.2"
	I0629 19:10:17.288452       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 19:10:17.289193       1 config.go:444] "Starting node config controller"
	I0629 19:10:17.289230       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0629 19:10:17.289194       1 config.go:226] "Starting endpoint slice config controller"
	I0629 19:10:17.289629       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0629 19:10:17.289204       1 config.go:317] "Starting service config controller"
	I0629 19:10:17.289648       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0629 19:10:17.389386       1 shared_informer.go:262] Caches are synced for node config
	I0629 19:10:17.431143       1 shared_informer.go:262] Caches are synced for service config
	I0629 19:10:17.431193       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [e9f55dbf4dfd] <==
	* W0629 19:09:59.444155       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0629 19:09:59.444165       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0629 19:09:59.444336       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0629 19:09:59.444367       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0629 19:09:59.445169       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0629 19:09:59.445238       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0629 19:10:00.294922       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0629 19:10:00.294969       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0629 19:10:00.317355       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0629 19:10:00.317390       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0629 19:10:00.373382       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0629 19:10:00.373451       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0629 19:10:00.381178       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0629 19:10:00.381194       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0629 19:10:00.395903       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0629 19:10:00.395939       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0629 19:10:00.396026       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0629 19:10:00.396056       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0629 19:10:00.441890       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0629 19:10:00.441927       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0629 19:10:00.487132       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0629 19:10:00.487168       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0629 19:10:00.591847       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0629 19:10:00.591883       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0629 19:10:01.136652       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-29 19:05:26 UTC, end at Wed 2022-06-29 19:11:24 UTC. --
	Jun 29 19:11:21 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:21.698470    9936 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2661f4fb-d410-4b0b-9abe-0c030e00d8b3-tmp-dir\") pod \"metrics-server-5c6f97fb75-smdz9\" (UID: \"2661f4fb-d410-4b0b-9abe-0c030e00d8b3\") " pod="kube-system/metrics-server-5c6f97fb75-smdz9"
	Jun 29 19:11:21 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:21.698563    9936 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60c259ab-57b4-463a-b089-fccaa6d3f6c0-config-volume\") pod \"coredns-6d4b75cb6d-54rws\" (UID: \"60c259ab-57b4-463a-b089-fccaa6d3f6c0\") " pod="kube-system/coredns-6d4b75cb6d-54rws"
	Jun 29 19:11:21 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:21.698612    9936 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgdf4\" (UniqueName: \"kubernetes.io/projected/513c4ddc-31bf-4472-b555-4f007825f07f-kube-api-access-hgdf4\") pod \"kubernetes-dashboard-5fd5574d9f-q9lqr\" (UID: \"513c4ddc-31bf-4472-b555-4f007825f07f\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-q9lqr"
	Jun 29 19:11:21 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:21.698656    9936 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qclx5\" (UniqueName: \"kubernetes.io/projected/f4468363-29b5-4d36-beef-5610f1e1625c-kube-api-access-qclx5\") pod \"dashboard-metrics-scraper-dffd48c4c-tcmv4\" (UID: \"f4468363-29b5-4d36-beef-5610f1e1625c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-tcmv4"
	Jun 29 19:11:21 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:21.698713    9936 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/322de8c5-d47e-4bb0-9d7d-ef640626c70c-kube-proxy\") pod \"kube-proxy-42mtt\" (UID: \"322de8c5-d47e-4bb0-9d7d-ef640626c70c\") " pod="kube-system/kube-proxy-42mtt"
	Jun 29 19:11:21 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:21.698797    9936 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxfrp\" (UniqueName: \"kubernetes.io/projected/60c259ab-57b4-463a-b089-fccaa6d3f6c0-kube-api-access-jxfrp\") pod \"coredns-6d4b75cb6d-54rws\" (UID: \"60c259ab-57b4-463a-b089-fccaa6d3f6c0\") " pod="kube-system/coredns-6d4b75cb6d-54rws"
	Jun 29 19:11:21 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:21.698837    9936 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s46s2\" (UniqueName: \"kubernetes.io/projected/322de8c5-d47e-4bb0-9d7d-ef640626c70c-kube-api-access-s46s2\") pod \"kube-proxy-42mtt\" (UID: \"322de8c5-d47e-4bb0-9d7d-ef640626c70c\") " pod="kube-system/kube-proxy-42mtt"
	Jun 29 19:11:21 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:21.698861    9936 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f4468363-29b5-4d36-beef-5610f1e1625c-tmp-volume\") pod \"dashboard-metrics-scraper-dffd48c4c-tcmv4\" (UID: \"f4468363-29b5-4d36-beef-5610f1e1625c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-tcmv4"
	Jun 29 19:11:21 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:21.698952    9936 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bc59072d-a402-4441-ace1-1ade0e3b7e2f-tmp\") pod \"storage-provisioner\" (UID: \"bc59072d-a402-4441-ace1-1ade0e3b7e2f\") " pod="kube-system/storage-provisioner"
	Jun 29 19:11:21 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:21.699086    9936 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6lxr\" (UniqueName: \"kubernetes.io/projected/2661f4fb-d410-4b0b-9abe-0c030e00d8b3-kube-api-access-d6lxr\") pod \"metrics-server-5c6f97fb75-smdz9\" (UID: \"2661f4fb-d410-4b0b-9abe-0c030e00d8b3\") " pod="kube-system/metrics-server-5c6f97fb75-smdz9"
	Jun 29 19:11:21 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:21.699129    9936 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-578j8\" (UniqueName: \"kubernetes.io/projected/bc59072d-a402-4441-ace1-1ade0e3b7e2f-kube-api-access-578j8\") pod \"storage-provisioner\" (UID: \"bc59072d-a402-4441-ace1-1ade0e3b7e2f\") " pod="kube-system/storage-provisioner"
	Jun 29 19:11:21 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:21.699147    9936 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/513c4ddc-31bf-4472-b555-4f007825f07f-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-q9lqr\" (UID: \"513c4ddc-31bf-4472-b555-4f007825f07f\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-q9lqr"
	Jun 29 19:11:21 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:21.699178    9936 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/322de8c5-d47e-4bb0-9d7d-ef640626c70c-xtables-lock\") pod \"kube-proxy-42mtt\" (UID: \"322de8c5-d47e-4bb0-9d7d-ef640626c70c\") " pod="kube-system/kube-proxy-42mtt"
	Jun 29 19:11:21 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:21.699228    9936 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/322de8c5-d47e-4bb0-9d7d-ef640626c70c-lib-modules\") pod \"kube-proxy-42mtt\" (UID: \"322de8c5-d47e-4bb0-9d7d-ef640626c70c\") " pod="kube-system/kube-proxy-42mtt"
	Jun 29 19:11:21 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:21.699277    9936 reconciler.go:157] "Reconciler: start to sync state"
	Jun 29 19:11:22 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:22.840574    9936 request.go:601] Waited for 1.128455817s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8444/api/v1/namespaces/kube-system/pods
	Jun 29 19:11:22 default-k8s-different-port-20220629120335-24356 kubelet[9936]: E0629 19:11:22.921066    9936 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-default-k8s-different-port-20220629120335-24356\" already exists" pod="kube-system/kube-controller-manager-default-k8s-different-port-20220629120335-24356"
	Jun 29 19:11:23 default-k8s-different-port-20220629120335-24356 kubelet[9936]: E0629 19:11:23.091691    9936 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-default-k8s-different-port-20220629120335-24356\" already exists" pod="kube-system/kube-scheduler-default-k8s-different-port-20220629120335-24356"
	Jun 29 19:11:23 default-k8s-different-port-20220629120335-24356 kubelet[9936]: E0629 19:11:23.244578    9936 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-default-k8s-different-port-20220629120335-24356\" already exists" pod="kube-system/etcd-default-k8s-different-port-20220629120335-24356"
	Jun 29 19:11:23 default-k8s-different-port-20220629120335-24356 kubelet[9936]: E0629 19:11:23.503695    9936 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-default-k8s-different-port-20220629120335-24356\" already exists" pod="kube-system/kube-apiserver-default-k8s-different-port-20220629120335-24356"
	Jun 29 19:11:24 default-k8s-different-port-20220629120335-24356 kubelet[9936]: E0629 19:11:24.112050    9936 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 29 19:11:24 default-k8s-different-port-20220629120335-24356 kubelet[9936]: E0629 19:11:24.112110    9936 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 29 19:11:24 default-k8s-different-port-20220629120335-24356 kubelet[9936]: E0629 19:11:24.112229    9936 kuberuntime_manager.go:905] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-d6lxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c6f97fb75-smdz9_kube-system(2661f4fb-d410-4b0b-9abe-0c030e00d8b3): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jun 29 19:11:24 default-k8s-different-port-20220629120335-24356 kubelet[9936]: E0629 19:11:24.112257    9936 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c6f97fb75-smdz9" podUID=2661f4fb-d410-4b0b-9abe-0c030e00d8b3
	Jun 29 19:11:24 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:24.343685    9936 scope.go:110] "RemoveContainer" containerID="04e7386bad2372abddbca585ae7218086dd2f9460b7e3264509d1d6845fd2962"
	
	* 
	* ==> kubernetes-dashboard [70de6e61337e] <==
	* 2022/06/29 19:10:32 Using namespace: kubernetes-dashboard
	2022/06/29 19:10:32 Using in-cluster config to connect to apiserver
	2022/06/29 19:10:32 Using secret token for csrf signing
	2022/06/29 19:10:32 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/29 19:10:32 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/29 19:10:32 Successful initial request to the apiserver, version: v1.24.2
	2022/06/29 19:10:32 Generating JWE encryption key
	2022/06/29 19:10:32 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/29 19:10:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/29 19:10:32 Initializing JWE encryption key from synchronized object
	2022/06/29 19:10:32 Creating in-cluster Sidecar client
	2022/06/29 19:10:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/29 19:10:32 Serving insecurely on HTTP port: 9090
	2022/06/29 19:11:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/29 19:10:32 Starting overwatch
	
	* 
	* ==> storage-provisioner [9902a6f6a073] <==
	* I0629 19:10:19.339446       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0629 19:10:19.348022       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0629 19:10:19.348072       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0629 19:10:19.354680       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0629 19:10:19.354977       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220629120335-24356_958d464d-0577-4625-be7c-ed7ea2c028c3!
	I0629 19:10:19.355688       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46e31cb1-36ec-437b-bd54-43b2929c0a6b", APIVersion:"v1", ResourceVersion:"472", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20220629120335-24356_958d464d-0577-4625-be7c-ed7ea2c028c3 became leader
	I0629 19:10:19.455188       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220629120335-24356_958d464d-0577-4625-be7c-ed7ea2c028c3!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220629120335-24356 -n default-k8s-different-port-20220629120335-24356
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220629120335-24356 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-smdz9
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220629120335-24356 describe pod metrics-server-5c6f97fb75-smdz9
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220629120335-24356 describe pod metrics-server-5c6f97fb75-smdz9: exit status 1 (273.88118ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-smdz9" not found

** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220629120335-24356 describe pod metrics-server-5c6f97fb75-smdz9: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220629120335-24356
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220629120335-24356:

-- stdout --
	[
	    {
	        "Id": "1ed0e6ce6fe40ff3f606be0e7c2524dff305d54eefdc9f4120036f1a6d20dc63",
	        "Created": "2022-06-29T19:03:42.606358049Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 292337,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T19:05:25.980383973Z",
	            "FinishedAt": "2022-06-29T19:05:24.073952813Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/1ed0e6ce6fe40ff3f606be0e7c2524dff305d54eefdc9f4120036f1a6d20dc63/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1ed0e6ce6fe40ff3f606be0e7c2524dff305d54eefdc9f4120036f1a6d20dc63/hostname",
	        "HostsPath": "/var/lib/docker/containers/1ed0e6ce6fe40ff3f606be0e7c2524dff305d54eefdc9f4120036f1a6d20dc63/hosts",
	        "LogPath": "/var/lib/docker/containers/1ed0e6ce6fe40ff3f606be0e7c2524dff305d54eefdc9f4120036f1a6d20dc63/1ed0e6ce6fe40ff3f606be0e7c2524dff305d54eefdc9f4120036f1a6d20dc63-json.log",
	        "Name": "/default-k8s-different-port-20220629120335-24356",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220629120335-24356:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220629120335-24356",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3b596c73a48476a8ee5734837ba3392b200f02816d2269c05dd34fc9415920f6-init/diff:/var/lib/docker/overlay2/fffebe0fdfada5807aeb835ff23043496ab70477725ee4f168b630301ac03e45/diff:/var/lib/docker/overlay2/d4eb6d2f34aa8e5c143d900dccdec5da9e3d130567442e6745d4efac5202fe49/diff:/var/lib/docker/overlay2/eb35fadba12ed9c48500d69b77e98e7dd72e90d3de5197d58b370df5b5dca4c7/diff:/var/lib/docker/overlay2/7b63894f671ef1edaa7c3b80a2acbde52dcdb21970e320799b6884e79553ea3e/diff:/var/lib/docker/overlay2/3740b6bc6ff226137eb09a6350d4395dc04bd9012c6c66125dc2ea6b663082cd/diff:/var/lib/docker/overlay2/a2fda66ed4937725e85838baed61cac418abe2ba55b4e664bf944246efcdd371/diff:/var/lib/docker/overlay2/574408913c5c73ee699b85768bbb4c0ce70e697bf6eb623e32017c62e8413acd/diff:/var/lib/docker/overlay2/1cde03c3877bfb18ad0533f814863e3030abec268ff30faceab8815ea7e2daf2/diff:/var/lib/docker/overlay2/52bf889e64b2ea0160f303622d5febb9c52b864e5a6dc2bfa5db90933ccaaa29/diff:/var/lib/docker/overlay2/b131e6ae4a7a7f5705d087e4001676276e4daa26d6acfc99799bb4992e322410/diff:/var/lib/docker/overlay2/3f5c774f6f46936a974bfc6530b012fda75a59b22450e3342486fe400ab4b531/diff:/var/lib/docker/overlay2/8462528084f0c44a79e421427e0e4bc9ddd7642428c47ff1899d41b265223245/diff:/var/lib/docker/overlay2/cb9765866d13ba37669ec242ea0a1af87c92c7291c716e52037a2ccadc64ac82/diff:/var/lib/docker/overlay2/f0d06e6fa53f3ca9622f1efcfac6fe3fd18d2e5b9e07be3d624b0b9987073e55/diff:/var/lib/docker/overlay2/4ebd12d8b25cff2d3d8a989c047b696088121f0964cc7f94c6d0178ef16e3e1f/diff:/var/lib/docker/overlay2/40e16f5720fd3a8c1c8792aea0ec143af819f19cad845dde40b57ed7e372ab73/diff:/var/lib/docker/overlay2/3ce5ee64ba683c997a13b7ffa65978b4c9652772729737facd794209d49251c3/diff:/var/lib/docker/overlay2/c55c549a78d490ea576942661ba65103ea2992693548217973bb8fa1a5948b74/diff:/var/lib/docker/overlay2/4651b16dbc2e22b8a43dc1154546514f2076168d12f9c108f85fe7c6e60325f0/diff:/var/lib/docker/overlay2/9576343ea03501b15b520a83ffdc675c6d9ecd501f6ffcf6564dd75aa4f2812a/diff:/var/lib/docker/overlay2/635ba7d01f96fd1ec1acabf157f4e5c00cbf80adf65b7f8873e444745fef2c9b/diff:/var/lib/docker/overlay2/6bbe0ce6ca00a7eb5bd7c22def5fcab4ebecab4a0b4cbc5ed236429671a41b6c/diff:/var/lib/docker/overlay2/b335551ba0fcfd6bff6ef5627289041f3083dc338e67b4f4728d4937bb6fb33a/diff:/var/lib/docker/overlay2/58cd90f6ad9016f3c4befb63eac504c9d2f0fc66251c5c9e3348080785d3cec4/diff:/var/lib/docker/overlay2/b7d943a8463e032d405d531846436b89574f10efeea6e4f2df92e3bb0e169d8e/diff:/var/lib/docker/overlay2/e633899f71c18e322af1b75837392bc89fd4275534b5bc70037965b0b80a770d/diff:/var/lib/docker/overlay2/651aabda39b5851bd186e23bc84f1029d819ed8eb032b13ac12f50f3d1486bfb/diff:/var/lib/docker/overlay2/3b137e27694d242a419b3fd2f8605837edfe77dae9462c63c3d7b41538e82591/diff:/var/lib/docker/overlay2/e9d4369b871c47acb146b73f8cbe14b89b0f74027df9117a7dc73f5dee8fee1c/diff:/var/lib/docker/overlay2/9379269362a969b07cc7d7f9faff9fa3b745529df38758733014a5dbe2470775/diff:/var/lib/docker/overlay2/9231c154723fa536d9894f703ec0388448e8611d5a01d54bca3a5b0a0b17ffd2/diff:/var/lib/docker/overlay2/9610e37ded5c6da7bd2c8edc56c3ae864637bb354f8ea3d6d1ccee6bd5c2aa7f/diff:/var/lib/docker/overlay2/025ecca5e756b1b8177204df7b2f2567a76dda456b2f1a8e312efd63150a8943/diff:/var/lib/docker/overlay2/7e69089e438e096c36ea0a4a37280fd036841e3287e57635e3407eb58fc0b6da/diff:/var/lib/docker/overlay2/c6d9ef67ed33e64c8ac8c4cdc7c33eb68f5266987969676165cabc2cf2fd346b/diff:/var/lib/docker/overlay2/394627c68237f7993b91eb0c377001630bb2e709dd58f65d899d44a3586dae91/diff:/var/lib/docker/overlay2/0c0c3c94789fc85cd70d9ee2b56d67ce6471d4dced47f21f15152d4edb6bc3e5/diff:/var/lib/docker/overlay2/849809e48c9bcbfe092aa063fcd274f284eeacde89acbb602b439d4cf0aef9b6/diff:/var/lib/docker/overlay2/49c27f0a55f204b161aa2da33ba8004f46cb93bf673975ad1b6286ce659db632/diff:/var/lib/docker/overlay2/a712a8f5cdb2f3840c706296240407405826d2936df034393c1ddf3cf2480b5f/diff:/var/lib/docker/overlay2/47949bfd134ff7a50def5e9b3af3424faf216354d1f157552f3c63c67c2728ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b596c73a48476a8ee5734837ba3392b200f02816d2269c05dd34fc9415920f6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b596c73a48476a8ee5734837ba3392b200f02816d2269c05dd34fc9415920f6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b596c73a48476a8ee5734837ba3392b200f02816d2269c05dd34fc9415920f6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220629120335-24356",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220629120335-24356/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220629120335-24356",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220629120335-24356",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220629120335-24356",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "266ce96f2e686f18200d6d605b579b4dbedf7dd94d5b65d64af1ee9a8b4fe204",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61600"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61601"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61602"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61603"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61604"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/266ce96f2e68",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220629120335-24356": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1ed0e6ce6fe4",
	                        "default-k8s-different-port-20220629120335-24356"
	                    ],
	                    "NetworkID": "0387efa2aeb00cda0190330b61b4511178405a5af8b14254981312d43b80643e",
	                    "EndpointID": "559b6bd4b7de6d7b58462db817b7abecc9850e5812d3aeb14922334ee3b314d9",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220629120335-24356 -n default-k8s-different-port-20220629120335-24356
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-different-port-20220629120335-24356 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p default-k8s-different-port-20220629120335-24356 logs -n 25: (2.752668459s)
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:53 PDT |                     |
	|         | old-k8s-version-20220629114717-24356              |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |          |         |         |                     |                     |
	|         | --disable-driver-mounts                           |          |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |          |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:55 PDT | 29 Jun 22 11:55 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | sudo crictl images -o json                        |          |         |         |                     |                     |
	| pause   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:55 PDT | 29 Jun 22 11:55 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	| unpause | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:55 PDT | 29 Jun 22 11:55 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:56 PDT | 29 Jun 22 11:56 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:56 PDT | 29 Jun 22 11:56 PDT |
	|         | no-preload-20220629114832-24356                   |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:56 PDT | 29 Jun 22 11:56 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT | 29 Jun 22 11:57 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT | 29 Jun 22 11:57 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT | 29 Jun 22 11:57 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT | 29 Jun 22 12:02 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |          |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |          |         |         |                     |                     |
	|         | --driver=docker                                   |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:02 PDT | 29 Jun 22 12:02 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | sudo crictl images -o json                        |          |         |         |                     |                     |
	| pause   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:02 PDT | 29 Jun 22 12:02 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	| unpause | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | embed-certs-20220629115611-24356                  |          |         |         |                     |                     |
	| delete  | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | disable-driver-mounts-20220629120335-24356        |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:04 PDT |
	|         | default-k8s-different-port-20220629120335-24356   |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |          |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | minikube | jenkins | v1.26.0 | 29 Jun 22 12:05 PDT | 29 Jun 22 12:05 PDT |
	|         | default-k8s-different-port-20220629120335-24356   |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |          |         |         |                     |                     |
	| stop    | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:05 PDT | 29 Jun 22 12:05 PDT |
	|         | default-k8s-different-port-20220629120335-24356   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |          |         |         |                     |                     |
	| addons  | enable dashboard -p                               | minikube | jenkins | v1.26.0 | 29 Jun 22 12:05 PDT | 29 Jun 22 12:05 PDT |
	|         | default-k8s-different-port-20220629120335-24356   |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |          |         |         |                     |                     |
	| start   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:05 PDT | 29 Jun 22 12:10 PDT |
	|         | default-k8s-different-port-20220629120335-24356   |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |          |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                      |          |         |         |                     |                     |
	| ssh     | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:10 PDT | 29 Jun 22 12:10 PDT |
	|         | default-k8s-different-port-20220629120335-24356   |          |         |         |                     |                     |
	|         | sudo crictl images -o json                        |          |         |         |                     |                     |
	| pause   | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:10 PDT | 29 Jun 22 12:10 PDT |
	|         | default-k8s-different-port-20220629120335-24356   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	| unpause | -p                                                | minikube | jenkins | v1.26.0 | 29 Jun 22 12:11 PDT | 29 Jun 22 12:11 PDT |
	|         | default-k8s-different-port-20220629120335-24356   |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |          |         |         |                     |                     |
	|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/29 12:05:24
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 12:05:24.742130   40900 out.go:296] Setting OutFile to fd 1 ...
	I0629 12:05:24.742284   40900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 12:05:24.742289   40900 out.go:309] Setting ErrFile to fd 2...
	I0629 12:05:24.742293   40900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 12:05:24.742591   40900 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 12:05:24.742844   40900 out.go:303] Setting JSON to false
	I0629 12:05:24.757723   40900 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":11092,"bootTime":1656518432,"procs":372,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0629 12:05:24.757833   40900 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 12:05:24.779949   40900 out.go:177] * [default-k8s-different-port-20220629120335-24356] minikube v1.26.0 on Darwin 12.4
	I0629 12:05:24.822677   40900 notify.go:193] Checking for updates...
	I0629 12:05:24.843727   40900 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 12:05:24.864447   40900 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 12:05:24.885678   40900 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0629 12:05:24.907000   40900 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 12:05:24.928764   40900 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 12:05:24.950479   40900 config.go:178] Loaded profile config "default-k8s-different-port-20220629120335-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 12:05:24.950992   40900 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 12:05:25.019818   40900 docker.go:137] docker version: linux-20.10.16
	I0629 12:05:25.019950   40900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 12:05:25.141831   40900 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 19:05:25.07732428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 12:05:25.163888   40900 out.go:177] * Using the docker driver based on existing profile
	I0629 12:05:25.185202   40900 start.go:284] selected driver: docker
	I0629 12:05:25.185226   40900 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220629120335-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220629120335-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 12:05:25.185357   40900 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 12:05:25.188563   40900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 12:05:25.310870   40900 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 19:05:25.24659859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 12:05:25.311015   40900 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0629 12:05:25.311029   40900 cni.go:95] Creating CNI manager for ""
	I0629 12:05:25.311037   40900 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 12:05:25.311045   40900 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220629120335-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220629120335-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 12:05:25.354945   40900 out.go:177] * Starting control plane node default-k8s-different-port-20220629120335-24356 in cluster default-k8s-different-port-20220629120335-24356
	I0629 12:05:25.376387   40900 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 12:05:25.397604   40900 out.go:177] * Pulling base image ...
	I0629 12:05:25.439278   40900 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 12:05:25.439289   40900 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 12:05:25.439326   40900 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0629 12:05:25.439338   40900 cache.go:57] Caching tarball of preloaded images
	I0629 12:05:25.439430   40900 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 12:05:25.439443   40900 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0629 12:05:25.440039   40900 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/config.json ...
	I0629 12:05:25.502774   40900 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 12:05:25.502801   40900 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 12:05:25.502814   40900 cache.go:208] Successfully downloaded all kic artifacts
	I0629 12:05:25.502860   40900 start.go:352] acquiring machines lock for default-k8s-different-port-20220629120335-24356: {Name:mk60bb2ebdcfb729d9b918baeac3e57ffdf371c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 12:05:25.502941   40900 start.go:356] acquired machines lock for "default-k8s-different-port-20220629120335-24356" in 63.513µs
	I0629 12:05:25.502981   40900 start.go:94] Skipping create...Using existing machine configuration
	I0629 12:05:25.502990   40900 fix.go:55] fixHost starting: 
	I0629 12:05:25.503259   40900 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220629120335-24356 --format={{.State.Status}}
	I0629 12:05:25.570445   40900 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220629120335-24356: state=Stopped err=<nil>
	W0629 12:05:25.570489   40900 fix.go:129] unexpected machine state, will restart: <nil>
	I0629 12:05:25.612862   40900 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220629120335-24356" ...
	I0629 12:05:25.633949   40900 cli_runner.go:164] Run: docker start default-k8s-different-port-20220629120335-24356
	I0629 12:05:25.987798   40900 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220629120335-24356 --format={{.State.Status}}
	I0629 12:05:26.061121   40900 kic.go:416] container "default-k8s-different-port-20220629120335-24356" state is running.
	I0629 12:05:26.061836   40900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220629120335-24356
	I0629 12:05:26.139968   40900 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/config.json ...
	I0629 12:05:26.140415   40900 machine.go:88] provisioning docker machine ...
	I0629 12:05:26.140442   40900 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220629120335-24356"
	I0629 12:05:26.140525   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:26.214964   40900 main.go:134] libmachine: Using SSH client type: native
	I0629 12:05:26.215172   40900 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 61600 <nil> <nil>}
	I0629 12:05:26.215190   40900 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220629120335-24356 && echo "default-k8s-different-port-20220629120335-24356" | sudo tee /etc/hostname
	I0629 12:05:26.348464   40900 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220629120335-24356
	
	I0629 12:05:26.348558   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:26.425518   40900 main.go:134] libmachine: Using SSH client type: native
	I0629 12:05:26.425668   40900 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 61600 <nil> <nil>}
	I0629 12:05:26.425687   40900 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220629120335-24356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220629120335-24356/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220629120335-24356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 12:05:26.545918   40900 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 12:05:26.545942   40900 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube}
	I0629 12:05:26.545963   40900 ubuntu.go:177] setting up certificates
	I0629 12:05:26.545973   40900 provision.go:83] configureAuth start
	I0629 12:05:26.546049   40900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220629120335-24356
	I0629 12:05:26.619306   40900 provision.go:138] copyHostCerts
	I0629 12:05:26.619394   40900 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem, removing ...
	I0629 12:05:26.619403   40900 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem
	I0629 12:05:26.619490   40900 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem (1082 bytes)
	I0629 12:05:26.619715   40900 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem, removing ...
	I0629 12:05:26.619724   40900 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem
	I0629 12:05:26.619781   40900 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem (1123 bytes)
	I0629 12:05:26.619936   40900 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem, removing ...
	I0629 12:05:26.619942   40900 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem
	I0629 12:05:26.620000   40900 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem (1675 bytes)
	I0629 12:05:26.620120   40900 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220629120335-24356 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220629120335-24356]
	I0629 12:05:26.875537   40900 provision.go:172] copyRemoteCerts
	I0629 12:05:26.875603   40900 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 12:05:26.875648   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:26.946535   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:05:27.033514   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0629 12:05:27.051758   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0629 12:05:27.069055   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0629 12:05:27.086527   40900 provision.go:86] duration metric: configureAuth took 540.524483ms
	I0629 12:05:27.086541   40900 ubuntu.go:193] setting minikube options for container-runtime
	I0629 12:05:27.086686   40900 config.go:178] Loaded profile config "default-k8s-different-port-20220629120335-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 12:05:27.086764   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:27.159960   40900 main.go:134] libmachine: Using SSH client type: native
	I0629 12:05:27.160131   40900 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 61600 <nil> <nil>}
	I0629 12:05:27.160142   40900 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0629 12:05:27.278802   40900 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0629 12:05:27.278816   40900 ubuntu.go:71] root file system type: overlay
	I0629 12:05:27.278968   40900 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0629 12:05:27.279043   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:27.349746   40900 main.go:134] libmachine: Using SSH client type: native
	I0629 12:05:27.349897   40900 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 61600 <nil> <nil>}
	I0629 12:05:27.349945   40900 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0629 12:05:27.475893   40900 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0629 12:05:27.475971   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:27.546989   40900 main.go:134] libmachine: Using SSH client type: native
	I0629 12:05:27.547153   40900 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 61600 <nil> <nil>}
	I0629 12:05:27.547166   40900 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0629 12:05:27.669428   40900 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 12:05:27.669447   40900 machine.go:91] provisioned docker machine in 1.528975004s
	I0629 12:05:27.669457   40900 start.go:306] post-start starting for "default-k8s-different-port-20220629120335-24356" (driver="docker")
	I0629 12:05:27.669462   40900 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 12:05:27.669535   40900 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 12:05:27.669581   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:27.740351   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:05:27.824385   40900 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 12:05:27.827915   40900 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 12:05:27.827935   40900 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 12:05:27.827942   40900 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 12:05:27.827947   40900 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 12:05:27.827955   40900 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/addons for local assets ...
	I0629 12:05:27.828087   40900 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files for local assets ...
	I0629 12:05:27.828236   40900 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem -> 243562.pem in /etc/ssl/certs
	I0629 12:05:27.828402   40900 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 12:05:27.835575   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /etc/ssl/certs/243562.pem (1708 bytes)
	I0629 12:05:27.854776   40900 start.go:309] post-start completed in 185.304144ms
	I0629 12:05:27.854863   40900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 12:05:27.854912   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:27.926994   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:05:28.012302   40900 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 12:05:28.016583   40900 fix.go:57] fixHost completed within 2.513517141s
	I0629 12:05:28.016593   40900 start.go:81] releasing machines lock for "default-k8s-different-port-20220629120335-24356", held for 2.513569784s
	I0629 12:05:28.016680   40900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220629120335-24356
	I0629 12:05:28.088364   40900 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 12:05:28.088365   40900 ssh_runner.go:195] Run: systemctl --version
	I0629 12:05:28.088430   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:28.088437   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:28.164662   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:05:28.166354   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:05:28.248710   40900 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0629 12:05:28.728545   40900 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0629 12:05:28.728612   40900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 12:05:28.740680   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 12:05:28.753053   40900 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0629 12:05:28.822506   40900 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0629 12:05:28.886100   40900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 12:05:28.947818   40900 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0629 12:05:29.176842   40900 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0629 12:05:29.240921   40900 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 12:05:29.307948   40900 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0629 12:05:29.317549   40900 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0629 12:05:29.317619   40900 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0629 12:05:29.321834   40900 start.go:468] Will wait 60s for crictl version
	I0629 12:05:29.321886   40900 ssh_runner.go:195] Run: sudo crictl version
	I0629 12:05:29.435634   40900 start.go:477] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0629 12:05:29.435699   40900 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 12:05:29.470251   40900 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 12:05:29.547597   40900 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0629 12:05:29.547772   40900 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220629120335-24356 dig +short host.docker.internal
	I0629 12:05:29.681289   40900 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0629 12:05:29.681400   40900 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0629 12:05:29.685664   40900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 12:05:29.695599   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:05:29.781285   40900 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 12:05:29.781347   40900 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 12:05:29.812942   40900 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0629 12:05:29.812959   40900 docker.go:533] Images already preloaded, skipping extraction
	I0629 12:05:29.813043   40900 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 12:05:29.844705   40900 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0629 12:05:29.844730   40900 cache_images.go:84] Images are preloaded, skipping loading
	I0629 12:05:29.844805   40900 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0629 12:05:29.916958   40900 cni.go:95] Creating CNI manager for ""
	I0629 12:05:29.916970   40900 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 12:05:29.916983   40900 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0629 12:05:29.916996   40900 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220629120335-24356 NodeName:default-k8s-different-port-20220629120335-24356 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 12:05:29.917102   40900 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-different-port-20220629120335-24356"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0629 12:05:29.917190   40900 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=default-k8s-different-port-20220629120335-24356 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220629120335-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0629 12:05:29.917247   40900 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0629 12:05:29.924780   40900 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 12:05:29.924831   40900 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0629 12:05:29.932000   40900 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (509 bytes)
	I0629 12:05:29.944399   40900 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 12:05:29.956598   40900 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I0629 12:05:29.968949   40900 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0629 12:05:29.972554   40900 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 12:05:29.981744   40900 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356 for IP: 192.168.67.2
	I0629 12:05:29.981862   40900 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key
	I0629 12:05:29.981909   40900 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key
	I0629 12:05:29.981988   40900 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/client.key
	I0629 12:05:29.982046   40900 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/apiserver.key.c7fa3a9e
	I0629 12:05:29.982104   40900 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/proxy-client.key
	I0629 12:05:29.982298   40900 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem (1338 bytes)
	W0629 12:05:29.982336   40900 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356_empty.pem, impossibly tiny 0 bytes
	I0629 12:05:29.982348   40900 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem (1679 bytes)
	I0629 12:05:29.982396   40900 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem (1082 bytes)
	I0629 12:05:29.982427   40900 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem (1123 bytes)
	I0629 12:05:29.982457   40900 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem (1675 bytes)
	I0629 12:05:29.982526   40900 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem (1708 bytes)
	I0629 12:05:29.983077   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0629 12:05:29.999906   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0629 12:05:30.016302   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0629 12:05:30.032829   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0629 12:05:30.049113   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 12:05:30.066680   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0629 12:05:30.085650   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 12:05:30.104770   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0629 12:05:30.122336   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /usr/share/ca-certificates/243562.pem (1708 bytes)
	I0629 12:05:30.139889   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 12:05:30.156772   40900 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem --> /usr/share/ca-certificates/24356.pem (1338 bytes)
	I0629 12:05:30.173073   40900 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0629 12:05:30.185217   40900 ssh_runner.go:195] Run: openssl version
	I0629 12:05:30.190479   40900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 12:05:30.198324   40900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 12:05:30.202106   40900 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 17:54 /usr/share/ca-certificates/minikubeCA.pem
	I0629 12:05:30.202144   40900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 12:05:30.207062   40900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 12:05:30.214124   40900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24356.pem && ln -fs /usr/share/ca-certificates/24356.pem /etc/ssl/certs/24356.pem"
	I0629 12:05:30.221651   40900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24356.pem
	I0629 12:05:30.225365   40900 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 17:58 /usr/share/ca-certificates/24356.pem
	I0629 12:05:30.225410   40900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24356.pem
	I0629 12:05:30.230811   40900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24356.pem /etc/ssl/certs/51391683.0"
	I0629 12:05:30.238146   40900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/243562.pem && ln -fs /usr/share/ca-certificates/243562.pem /etc/ssl/certs/243562.pem"
	I0629 12:05:30.245876   40900 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/243562.pem
	I0629 12:05:30.249833   40900 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 17:58 /usr/share/ca-certificates/243562.pem
	I0629 12:05:30.249872   40900 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/243562.pem
	I0629 12:05:30.261528   40900 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/243562.pem /etc/ssl/certs/3ec20f2e.0"
	I0629 12:05:30.271938   40900 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220629120335-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:default-k8s-different-port-20220629120335-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 12:05:30.272050   40900 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 12:05:30.300455   40900 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0629 12:05:30.307957   40900 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0629 12:05:30.307974   40900 kubeadm.go:626] restartCluster start
	I0629 12:05:30.308019   40900 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0629 12:05:30.315073   40900 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:30.315136   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
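	[annotation] The cli_runner line above uses a Go template against `docker container inspect` to read the published host port for 8444/tcp. The same lookup can be sketched in Python against the raw inspect JSON (an illustrative sketch, not minikube code; `host_port` and the sample payload are hypothetical, though the shape follows standard Docker inspect output):

```python
import json

def host_port(inspect_json, container_port="8444/tcp"):
    """Extract the published host port for a container port from
    `docker container inspect` output.

    Equivalent of the Go template in the log line above:
    {{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}
    Assumes the standard Docker inspect JSON shape (a list of containers).
    """
    data = json.loads(inspect_json)[0]
    return data["NetworkSettings"]["Ports"][container_port][0]["HostPort"]
```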
	I0629 12:05:30.387728   40900 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220629120335-24356" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 12:05:30.387917   40900 kubeconfig.go:127] "default-k8s-different-port-20220629120335-24356" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig - will repair!
	I0629 12:05:30.388246   40900 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk20ebad566718388182fa7c9da1cb4ef6bd9ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 12:05:30.389575   40900 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0629 12:05:30.397283   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:30.397330   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:30.405451   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:30.607595   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:30.607799   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:30.618280   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:30.805713   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:30.805781   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:30.814896   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:31.007650   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:31.007853   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:31.018553   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:31.205587   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:31.205734   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:31.216423   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:31.407635   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:31.407906   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:31.418511   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:31.605584   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:31.605644   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:31.615628   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:31.806285   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:31.806423   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:31.817288   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:32.005635   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:32.005834   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:32.016849   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:32.206682   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:32.206849   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:32.218451   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:32.405896   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:32.406007   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:32.416979   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:32.606317   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:32.606498   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:32.616827   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:32.805660   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:32.805734   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:32.815566   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:33.007709   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:33.007876   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:33.019040   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:33.206756   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:33.206924   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:33.218107   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:33.407701   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:33.407880   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:33.418775   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:33.418786   40900 api_server.go:165] Checking apiserver status ...
	I0629 12:05:33.418833   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:05:33.426759   40900 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:33.426770   40900 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
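	[annotation] The sequence above shows the restart path polling `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 200ms until a short deadline passes, then concluding the apiserver is down and a reconfigure is needed. The retry pattern can be sketched as follows (a sketch only; `wait_for` and the timeout/interval values are assumptions, not minikube's implementation):

```python
import time

def wait_for(check, timeout=3.0, interval=0.2):
    """Poll `check` until it returns truthy or `timeout` elapses.

    Mirrors the ~200ms pgrep retry loop visible in the log; the
    specific timeout and interval here are illustrative assumptions.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return bool(check())  # one final attempt at the deadline
```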
	I0629 12:05:33.426779   40900 kubeadm.go:1092] stopping kube-system containers ...
	I0629 12:05:33.426834   40900 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 12:05:33.458274   40900 docker.go:434] Stopping containers: [17ccfd6d87bb f1818c465224 c1adcf1be18e cf519054c3a4 9f0b97ca9575 b425c6e78162 b2c6e14c7587 2a7a4e44fd96 d3440e6bd030 f677cfba52c7 9ba118edb0f3 55aed3b8ba56 2667b1e639dc 70e86622f020 855f6856c31f]
	I0629 12:05:33.458347   40900 ssh_runner.go:195] Run: docker stop 17ccfd6d87bb f1818c465224 c1adcf1be18e cf519054c3a4 9f0b97ca9575 b425c6e78162 b2c6e14c7587 2a7a4e44fd96 d3440e6bd030 f677cfba52c7 9ba118edb0f3 55aed3b8ba56 2667b1e639dc 70e86622f020 855f6856c31f
	I0629 12:05:33.489879   40900 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0629 12:05:33.500322   40900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 12:05:33.507933   40900 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun 29 19:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jun 29 19:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Jun 29 19:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun 29 19:03 /etc/kubernetes/scheduler.conf
	
	I0629 12:05:33.507980   40900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0629 12:05:33.515037   40900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0629 12:05:33.522593   40900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0629 12:05:33.529626   40900 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:33.529674   40900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0629 12:05:33.536295   40900 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0629 12:05:33.543526   40900 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:05:33.543573   40900 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0629 12:05:33.550652   40900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 12:05:33.557856   40900 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0629 12:05:33.557869   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:05:33.603386   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:05:34.614038   40900 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.010601206s)
	I0629 12:05:34.614052   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:05:34.784553   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:05:34.833543   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:05:34.911771   40900 api_server.go:51] waiting for apiserver process to appear ...
	I0629 12:05:34.911850   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:05:35.421616   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:05:35.921384   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:05:35.935811   40900 api_server.go:71] duration metric: took 1.024009063s to wait for apiserver process to appear ...
	I0629 12:05:35.935830   40900 api_server.go:87] waiting for apiserver healthz status ...
	I0629 12:05:35.935849   40900 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61604/healthz ...
	I0629 12:05:35.937118   40900 api_server.go:256] stopped: https://127.0.0.1:61604/healthz: Get "https://127.0.0.1:61604/healthz": EOF
	I0629 12:05:36.438094   40900 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61604/healthz ...
	I0629 12:05:39.455472   40900 api_server.go:266] https://127.0.0.1:61604/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0629 12:05:39.455492   40900 api_server.go:102] status: https://127.0.0.1:61604/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0629 12:05:39.937469   40900 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61604/healthz ...
	I0629 12:05:39.943847   40900 api_server.go:266] https://127.0.0.1:61604/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 12:05:39.943858   40900 api_server.go:102] status: https://127.0.0.1:61604/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 12:05:40.437422   40900 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61604/healthz ...
	I0629 12:05:40.444593   40900 api_server.go:266] https://127.0.0.1:61604/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 12:05:40.444607   40900 api_server.go:102] status: https://127.0.0.1:61604/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 12:05:40.937423   40900 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61604/healthz ...
	I0629 12:05:40.942951   40900 api_server.go:266] https://127.0.0.1:61604/healthz returned 200:
	ok
	I0629 12:05:40.949694   40900 api_server.go:140] control plane version: v1.24.2
	I0629 12:05:40.949709   40900 api_server.go:130] duration metric: took 5.0137233s to wait for apiserver health ...
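	[annotation] The healthz wait above tolerates 403 (RBAC roles not yet bootstrapped) and verbose 500 responses (failing poststarthooks), retrying until the endpoint returns 200 "ok". In the 500 bodies each check is prefixed with `[+]` or `[-]`; the failing ones can be pulled out like this (a sketch of the body format seen in the log; `failing_checks` is a hypothetical helper, not minikube code):

```python
def failing_checks(healthz_body):
    """List the names of failing checks in a verbose /healthz body.

    Lines look like "[+]ping ok" or
    "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld";
    only the "[-]" entries are returned, with the trailing
    " failed: ..." suffix stripped.
    """
    return [line[3:].split(" failed", 1)[0]
            for line in healthz_body.splitlines()
            if line.startswith("[-]")]
```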
	I0629 12:05:40.949717   40900 cni.go:95] Creating CNI manager for ""
	I0629 12:05:40.949721   40900 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 12:05:40.949730   40900 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 12:05:40.956768   40900 system_pods.go:59] 8 kube-system pods found
	I0629 12:05:40.956784   40900 system_pods.go:61] "coredns-6d4b75cb6d-sr5rq" [6859dc98-d098-4a2f-b3e6-6e5b6225e930] Running
	I0629 12:05:40.956790   40900 system_pods.go:61] "etcd-default-k8s-different-port-20220629120335-24356" [4af024aa-48ac-40b0-b4c8-d05ab73ec465] Running
	I0629 12:05:40.956794   40900 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220629120335-24356" [bd9308ff-a917-4e0e-9d5c-8192ea128b2f] Running
	I0629 12:05:40.956807   40900 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220629120335-24356" [5d116566-36ba-4925-973b-c8622702e1e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0629 12:05:40.956811   40900 system_pods.go:61] "kube-proxy-c4lzs" [9bc1f0bb-d9c3-4809-a4b2-0f750021bad3] Running
	I0629 12:05:40.956834   40900 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220629120335-24356" [22bd5cf2-dd2c-4cb9-ad4b-8ea4c8d5772f] Running
	I0629 12:05:40.956839   40900 system_pods.go:61] "metrics-server-5c6f97fb75-rfjxz" [a1dcb333-c180-4b6b-8f3f-025a41f001b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 12:05:40.956843   40900 system_pods.go:61] "storage-provisioner" [5f591cc6-9b0f-4275-89e2-3096f390587d] Running
	I0629 12:05:40.956847   40900 system_pods.go:74] duration metric: took 7.112659ms to wait for pod list to return data ...
	I0629 12:05:40.956853   40900 node_conditions.go:102] verifying NodePressure condition ...
	I0629 12:05:40.959478   40900 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0629 12:05:40.959495   40900 node_conditions.go:123] node cpu capacity is 6
	I0629 12:05:40.959503   40900 node_conditions.go:105] duration metric: took 2.644447ms to run NodePressure ...
	I0629 12:05:40.959514   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:05:41.214716   40900 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0629 12:05:41.219273   40900 kubeadm.go:777] kubelet initialised
	I0629 12:05:41.219284   40900 kubeadm.go:778] duration metric: took 4.549914ms waiting for restarted kubelet to initialise ...
	I0629 12:05:41.219292   40900 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 12:05:41.225780   40900 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-sr5rq" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:41.231094   40900 pod_ready.go:92] pod "coredns-6d4b75cb6d-sr5rq" in "kube-system" namespace has status "Ready":"True"
	I0629 12:05:41.231106   40900 pod_ready.go:81] duration metric: took 5.312518ms waiting for pod "coredns-6d4b75cb6d-sr5rq" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:41.231116   40900 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:41.238011   40900 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:05:41.238021   40900 pod_ready.go:81] duration metric: took 6.900167ms waiting for pod "etcd-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:41.238028   40900 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:41.243816   40900 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:05:41.243825   40900 pod_ready.go:81] duration metric: took 5.792024ms waiting for pod "kube-apiserver-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:41.243832   40900 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:43.362002   40900 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 12:05:45.858402   40900 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 12:05:47.859472   40900 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 12:05:49.862061   40900 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"False"
	I0629 12:05:51.859532   40900 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:05:51.859545   40900 pod_ready.go:81] duration metric: took 10.615389832s waiting for pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:51.859553   40900 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c4lzs" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:51.864514   40900 pod_ready.go:92] pod "kube-proxy-c4lzs" in "kube-system" namespace has status "Ready":"True"
	I0629 12:05:51.864523   40900 pod_ready.go:81] duration metric: took 4.966121ms waiting for pod "kube-proxy-c4lzs" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:51.864529   40900 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:51.870041   40900 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:05:51.870052   40900 pod_ready.go:81] duration metric: took 5.516262ms waiting for pod "kube-scheduler-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:51.870058   40900 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace to be "Ready" ...
	I0629 12:05:53.883004   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:05:55.884160   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	[... identical pod_ready.go:102 checks repeated roughly every 2.5 seconds from 12:05:58 through 12:09:48; pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace reported status "Ready":"False" on every check ...]
	I0629 12:09:51.390289   40900 pod_ready.go:102] pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace has status "Ready":"False"
	I0629 12:09:51.884754   40900 pod_ready.go:81] duration metric: took 4m0.007433392s waiting for pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace to be "Ready" ...
	E0629 12:09:51.884779   40900 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-rfjxz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0629 12:09:51.884801   40900 pod_ready.go:38] duration metric: took 4m10.657980757s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 12:09:51.884847   40900 kubeadm.go:630] restartCluster took 4m21.569015743s
	W0629 12:09:51.884974   40900 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0629 12:09:51.885001   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0629 12:09:54.340631   40900 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.455542748s)
	I0629 12:09:54.340693   40900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 12:09:54.350928   40900 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 12:09:54.358196   40900 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0629 12:09:54.358240   40900 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 12:09:54.365645   40900 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0629 12:09:54.365669   40900 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0629 12:09:54.644180   40900 out.go:204]   - Generating certificates and keys ...
	I0629 12:09:55.436699   40900 out.go:204]   - Booting up control plane ...
	I0629 12:10:02.007426   40900 out.go:204]   - Configuring RBAC rules ...
	I0629 12:10:02.381881   40900 cni.go:95] Creating CNI manager for ""
	I0629 12:10:02.381896   40900 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 12:10:02.381926   40900 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0629 12:10:02.382004   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:02.382007   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=80ef72c6e06144133907f90b1b2924df52b551ed minikube.k8s.io/name=default-k8s-different-port-20220629120335-24356 minikube.k8s.io/updated_at=2022_06_29T12_10_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:02.398555   40900 ops.go:34] apiserver oom_adj: -16
	I0629 12:10:02.524549   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:03.081788   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:03.580947   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:04.082906   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:04.581016   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:05.080952   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:05.582778   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:06.082461   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:06.581135   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:07.081462   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:07.580952   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:08.083116   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:08.582944   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:09.081159   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:09.583028   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:10.081502   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:10.583083   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:11.082047   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:11.581902   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:12.080935   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:12.581027   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:13.081091   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:13.581484   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:14.081976   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:14.581567   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:15.081419   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:15.581169   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:16.081215   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:16.581098   40900 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0629 12:10:16.636385   40900 kubeadm.go:1045] duration metric: took 14.25401703s to wait for elevateKubeSystemPrivileges.
	I0629 12:10:16.636403   40900 kubeadm.go:397] StartCluster complete in 4m46.355879997s
	I0629 12:10:16.636421   40900 settings.go:142] acquiring lock: {Name:mk8cd784535a926dd1b6955ad1b3a357865d16d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 12:10:16.636502   40900 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 12:10:16.637057   40900 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk20ebad566718388182fa7c9da1cb4ef6bd9ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 12:10:17.154534   40900 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220629120335-24356" rescaled to 1
	I0629 12:10:17.154581   40900 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 12:10:17.154592   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0629 12:10:17.154635   40900 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0629 12:10:17.179168   40900 out.go:177] * Verifying Kubernetes components...
	I0629 12:10:17.154816   40900 config.go:178] Loaded profile config "default-k8s-different-port-20220629120335-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 12:10:17.179227   40900 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220629120335-24356"
	I0629 12:10:17.179238   40900 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220629120335-24356"
	I0629 12:10:17.179242   40900 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220629120335-24356"
	I0629 12:10:17.179244   40900 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220629120335-24356"
	I0629 12:10:17.251996   40900 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220629120335-24356"
	I0629 12:10:17.252003   40900 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220629120335-24356"
	W0629 12:10:17.252026   40900 addons.go:162] addon storage-provisioner should already be in state true
	I0629 12:10:17.252026   40900 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220629120335-24356"
	I0629 12:10:17.252032   40900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W0629 12:10:17.252012   40900 addons.go:162] addon metrics-server should already be in state true
	I0629 12:10:17.252011   40900 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220629120335-24356"
	W0629 12:10:17.252073   40900 addons.go:162] addon dashboard should already be in state true
	I0629 12:10:17.252075   40900 host.go:66] Checking if "default-k8s-different-port-20220629120335-24356" exists ...
	I0629 12:10:17.252094   40900 host.go:66] Checking if "default-k8s-different-port-20220629120335-24356" exists ...
	I0629 12:10:17.252113   40900 host.go:66] Checking if "default-k8s-different-port-20220629120335-24356" exists ...
	I0629 12:10:17.252342   40900 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220629120335-24356 --format={{.State.Status}}
	I0629 12:10:17.252474   40900 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220629120335-24356 --format={{.State.Status}}
	I0629 12:10:17.253292   40900 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220629120335-24356 --format={{.State.Status}}
	I0629 12:10:17.253467   40900 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220629120335-24356 --format={{.State.Status}}
	I0629 12:10:17.264182   40900 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0629 12:10:17.276210   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:10:17.405718   40900 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 12:10:17.415915   40900 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220629120335-24356"
	I0629 12:10:17.433419   40900 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220629120335-24356" to be "Ready" ...
	I0629 12:10:17.443055   40900 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 12:10:17.464058   40900 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	W0629 12:10:17.484802   40900 addons.go:162] addon default-storageclass should already be in state true
	I0629 12:10:17.506219   40900 host.go:66] Checking if "default-k8s-different-port-20220629120335-24356" exists ...
	I0629 12:10:17.484810   40900 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0629 12:10:17.484823   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0629 12:10:17.506850   40900 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220629120335-24356 --format={{.State.Status}}
	I0629 12:10:17.520608   40900 node_ready.go:49] node "default-k8s-different-port-20220629120335-24356" has status "Ready":"True"
	I0629 12:10:17.527049   40900 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0629 12:10:17.527075   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:10:17.563798   40900 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0629 12:10:17.563870   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0629 12:10:17.563872   40900 node_ready.go:38] duration metric: took 79.048397ms waiting for node "default-k8s-different-port-20220629120335-24356" to be "Ready" ...
	I0629 12:10:17.585134   40900 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 12:10:17.585184   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0629 12:10:17.585206   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0629 12:10:17.585291   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:10:17.585296   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:10:17.593199   40900 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-54rws" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:17.665696   40900 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0629 12:10:17.665711   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0629 12:10:17.665787   40900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220629120335-24356
	I0629 12:10:17.670001   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:10:17.696870   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:10:17.700767   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:10:17.759343   40900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61600 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/default-k8s-different-port-20220629120335-24356/id_rsa Username:docker}
	I0629 12:10:17.835815   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0629 12:10:17.835838   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0629 12:10:17.837925   40900 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0629 12:10:17.837935   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0629 12:10:17.850995   40900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 12:10:17.922795   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0629 12:10:17.922813   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0629 12:10:17.933868   40900 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0629 12:10:17.933891   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0629 12:10:17.949816   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0629 12:10:17.949837   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0629 12:10:18.025643   40900 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0629 12:10:18.025663   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0629 12:10:18.040174   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0629 12:10:18.040187   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0629 12:10:18.053606   40900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0629 12:10:18.116478   40900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0629 12:10:18.137318   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0629 12:10:18.137344   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0629 12:10:18.240726   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0629 12:10:18.240742   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0629 12:10:18.319690   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0629 12:10:18.319710   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0629 12:10:18.344325   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0629 12:10:18.344337   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0629 12:10:18.358646   40900 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0629 12:10:18.358658   40900 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0629 12:10:18.373107   40900 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0629 12:10:18.639163   40900 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.374898461s)
	I0629 12:10:18.639189   40900 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0629 12:10:18.848043   40900 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220629120335-24356"
	I0629 12:10:19.169072   40900 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0629 12:10:19.227078   40900 addons.go:414] enableAddons completed in 2.072399823s
	I0629 12:10:19.630154   40900 pod_ready.go:102] pod "coredns-6d4b75cb6d-54rws" in "kube-system" namespace has status "Ready":"False"
	I0629 12:10:22.129155   40900 pod_ready.go:102] pod "coredns-6d4b75cb6d-54rws" in "kube-system" namespace has status "Ready":"False"
	I0629 12:10:22.628751   40900 pod_ready.go:92] pod "coredns-6d4b75cb6d-54rws" in "kube-system" namespace has status "Ready":"True"
	I0629 12:10:22.628765   40900 pod_ready.go:81] duration metric: took 5.035392246s waiting for pod "coredns-6d4b75cb6d-54rws" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.628773   40900 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-vf8rl" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.633109   40900 pod_ready.go:92] pod "coredns-6d4b75cb6d-vf8rl" in "kube-system" namespace has status "Ready":"True"
	I0629 12:10:22.633116   40900 pod_ready.go:81] duration metric: took 4.337728ms waiting for pod "coredns-6d4b75cb6d-vf8rl" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.633122   40900 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.637139   40900 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:10:22.637148   40900 pod_ready.go:81] duration metric: took 4.019768ms waiting for pod "etcd-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.637154   40900 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.641938   40900 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:10:22.641946   40900 pod_ready.go:81] duration metric: took 4.786805ms waiting for pod "kube-apiserver-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.641954   40900 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.646093   40900 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:10:22.646102   40900 pod_ready.go:81] duration metric: took 4.142515ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:22.646108   40900 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-42mtt" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:23.025736   40900 pod_ready.go:92] pod "kube-proxy-42mtt" in "kube-system" namespace has status "Ready":"True"
	I0629 12:10:23.025745   40900 pod_ready.go:81] duration metric: took 379.621193ms waiting for pod "kube-proxy-42mtt" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:23.025752   40900 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:23.425527   40900 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace has status "Ready":"True"
	I0629 12:10:23.425537   40900 pod_ready.go:81] duration metric: took 399.769149ms waiting for pod "kube-scheduler-default-k8s-different-port-20220629120335-24356" in "kube-system" namespace to be "Ready" ...
	I0629 12:10:23.425543   40900 pod_ready.go:38] duration metric: took 5.840170789s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0629 12:10:23.425556   40900 api_server.go:51] waiting for apiserver process to appear ...
	I0629 12:10:23.425608   40900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:10:23.439147   40900 api_server.go:71] duration metric: took 6.284351507s to wait for apiserver process to appear ...
	I0629 12:10:23.439159   40900 api_server.go:87] waiting for apiserver healthz status ...
	I0629 12:10:23.439165   40900 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61604/healthz ...
	I0629 12:10:23.445058   40900 api_server.go:266] https://127.0.0.1:61604/healthz returned 200:
	ok
	I0629 12:10:23.446503   40900 api_server.go:140] control plane version: v1.24.2
	I0629 12:10:23.446513   40900 api_server.go:130] duration metric: took 7.350129ms to wait for apiserver health ...
	I0629 12:10:23.446519   40900 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 12:10:23.632422   40900 system_pods.go:59] 9 kube-system pods found
	I0629 12:10:23.632439   40900 system_pods.go:61] "coredns-6d4b75cb6d-54rws" [60c259ab-57b4-463a-b089-fccaa6d3f6c0] Running
	I0629 12:10:23.632443   40900 system_pods.go:61] "coredns-6d4b75cb6d-vf8rl" [238d3a6f-05f7-4855-85b5-0d07b08f9074] Running
	I0629 12:10:23.632462   40900 system_pods.go:61] "etcd-default-k8s-different-port-20220629120335-24356" [2ed40fc5-8a2c-4005-88a8-162bf7f5db1f] Running
	I0629 12:10:23.632466   40900 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220629120335-24356" [9b870f1e-f6ca-4bef-91f3-9d2de9de0aec] Running
	I0629 12:10:23.632490   40900 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220629120335-24356" [8cf4752e-ce9b-4b30-8d53-5f06bac5f6a1] Running
	I0629 12:10:23.632493   40900 system_pods.go:61] "kube-proxy-42mtt" [322de8c5-d47e-4bb0-9d7d-ef640626c70c] Running
	I0629 12:10:23.632500   40900 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220629120335-24356" [c257d0fd-43d0-40eb-b9d1-0f1d4747a0ae] Running
	I0629 12:10:23.632505   40900 system_pods.go:61] "metrics-server-5c6f97fb75-smdz9" [2661f4fb-d410-4b0b-9abe-0c030e00d8b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 12:10:23.632511   40900 system_pods.go:61] "storage-provisioner" [bc59072d-a402-4441-ace1-1ade0e3b7e2f] Running
	I0629 12:10:23.632516   40900 system_pods.go:74] duration metric: took 185.971139ms to wait for pod list to return data ...
	I0629 12:10:23.632520   40900 default_sa.go:34] waiting for default service account to be created ...
	I0629 12:10:23.825634   40900 default_sa.go:45] found service account: "default"
	I0629 12:10:23.825650   40900 default_sa.go:55] duration metric: took 193.118786ms for default service account to be created ...
	I0629 12:10:23.825658   40900 system_pods.go:116] waiting for k8s-apps to be running ...
	I0629 12:10:24.028758   40900 system_pods.go:86] 9 kube-system pods found
	I0629 12:10:24.028773   40900 system_pods.go:89] "coredns-6d4b75cb6d-54rws" [60c259ab-57b4-463a-b089-fccaa6d3f6c0] Running
	I0629 12:10:24.028778   40900 system_pods.go:89] "coredns-6d4b75cb6d-vf8rl" [238d3a6f-05f7-4855-85b5-0d07b08f9074] Running
	I0629 12:10:24.028781   40900 system_pods.go:89] "etcd-default-k8s-different-port-20220629120335-24356" [2ed40fc5-8a2c-4005-88a8-162bf7f5db1f] Running
	I0629 12:10:24.028785   40900 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220629120335-24356" [9b870f1e-f6ca-4bef-91f3-9d2de9de0aec] Running
	I0629 12:10:24.028789   40900 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220629120335-24356" [8cf4752e-ce9b-4b30-8d53-5f06bac5f6a1] Running
	I0629 12:10:24.028792   40900 system_pods.go:89] "kube-proxy-42mtt" [322de8c5-d47e-4bb0-9d7d-ef640626c70c] Running
	I0629 12:10:24.028795   40900 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220629120335-24356" [c257d0fd-43d0-40eb-b9d1-0f1d4747a0ae] Running
	I0629 12:10:24.028803   40900 system_pods.go:89] "metrics-server-5c6f97fb75-smdz9" [2661f4fb-d410-4b0b-9abe-0c030e00d8b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 12:10:24.028807   40900 system_pods.go:89] "storage-provisioner" [bc59072d-a402-4441-ace1-1ade0e3b7e2f] Running
	I0629 12:10:24.028813   40900 system_pods.go:126] duration metric: took 203.144154ms to wait for k8s-apps to be running ...
	I0629 12:10:24.028818   40900 system_svc.go:44] waiting for kubelet service to be running ....
	I0629 12:10:24.028868   40900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 12:10:24.039499   40900 system_svc.go:56] duration metric: took 10.670289ms WaitForService to wait for kubelet.
	I0629 12:10:24.039512   40900 kubeadm.go:572] duration metric: took 6.88470241s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0629 12:10:24.039525   40900 node_conditions.go:102] verifying NodePressure condition ...
	I0629 12:10:24.226241   40900 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0629 12:10:24.226255   40900 node_conditions.go:123] node cpu capacity is 6
	I0629 12:10:24.226262   40900 node_conditions.go:105] duration metric: took 186.72858ms to run NodePressure ...
	I0629 12:10:24.226270   40900 start.go:213] waiting for startup goroutines ...
	I0629 12:10:24.261002   40900 start.go:506] kubectl: 1.24.0, cluster: 1.24.2 (minor skew: 0)
	I0629 12:10:24.304930   40900 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220629120335-24356" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-29 19:05:26 UTC, end at Wed 2022-06-29 19:11:27 UTC. --
	Jun 29 19:09:53 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:09:53.485113410Z" level=info msg="ignoring event" container=ca88a12972cb48457f99584e6dd1688b4a7d6fbbe6373263e43ed94b89aec5aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:09:53 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:09:53.555928590Z" level=info msg="ignoring event" container=8c44e501d4657fde0b5b07d750e11592435e5e08bb11cb1fc171b8665b53d115 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:09:53 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:09:53.624285983Z" level=info msg="ignoring event" container=b6a0faf878ae5dac02d056da146655b97663a9d77848397a1f8a713ea3b4f351 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:09:53 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:09:53.688587145Z" level=info msg="ignoring event" container=6cb9b52c9ae704e3f7cc50313d5fde0ad1d716f50c1257d8d68523d9b621d92c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:09:53 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:09:53.753957666Z" level=info msg="ignoring event" container=8a1a18181a86bac7fc8a5b80f1fdc0659bb67a6e98ed6367581c9f4e5bfe5a1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:09:53 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:09:53.819824354Z" level=info msg="ignoring event" container=548762a86045ed693871a7903fd3676a4f97db89fbede0efee88e4cf0d6c5787 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:09:53 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:09:53.944103273Z" level=info msg="ignoring event" container=c43e76c5abd3a30015a00f927bf18e6976ae57847559aa6dd1da9a0f25cf1be5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:10:19 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:19.495990954Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:10:19 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:19.496038908Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:10:19 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:19.497289506Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:10:21 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:21.525312419Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 29 19:10:22 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:22.217449217Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 29 19:10:24 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:24.089142446Z" level=info msg="ignoring event" container=72cad539fa623cabf025e3f29d013ae8018b7841aa71058e6355fc508e0e0d8a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:10:24 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:24.278945740Z" level=info msg="ignoring event" container=7eabf4f18f63f348f05902172df676b6ea282816a0cf3ad861752180254584f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:10:26 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:26.096255422Z" level=info msg="ignoring event" container=03d0fb39e0996e71d265cc21913b948a8c98cdedb646bd1ba2ca87f34498cca4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:10:26 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:26.165740504Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jun 29 19:10:26 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:26.362246151Z" level=info msg="ignoring event" container=090f498474a26c17cbf99583a4dd7ce6125ee0fc1539983ad3475c3c08085b05 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:10:32 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:32.455766817Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:10:32 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:32.455789691Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:10:32 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:32.457139186Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:10:40 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:10:40.553733520Z" level=info msg="ignoring event" container=04e7386bad2372abddbca585ae7218086dd2f9460b7e3264509d1d6845fd2962 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:11:24 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:11:24.108878772Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:11:24 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:11:24.108910381Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:11:24 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:11:24.111512033Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jun 29 19:11:24 default-k8s-different-port-20220629120335-24356 dockerd[513]: time="2022-06-29T19:11:24.794964931Z" level=info msg="ignoring event" container=84921e9ab3774b4c024aeba5875cc6bf0ab247d92e70290b5543ce1242e7f06e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	84921e9ab3774       a90209bb39e3d                                                                                    3 seconds ago        Exited              dashboard-metrics-scraper   3                   6763c19f43ef7
	70de6e61337ed       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   56 seconds ago       Running             kubernetes-dashboard        0                   7b84e4c959d32
	9902a6f6a073a       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   596988a1fea3c
	2450635d2a98d       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   cf62692852b50
	e326c378c206a       a634548d10b03                                                                                    About a minute ago   Running             kube-proxy                  0                   b7d59273fe68f
	735198f9d479d       34cdf99b1bb3b                                                                                    About a minute ago   Running             kube-controller-manager     0                   e6f67c8b51b50
	eda9cd41cb249       d3377ffb7177c                                                                                    About a minute ago   Running             kube-apiserver              0                   0b59256d3a69f
	e9f55dbf4dfdd       5d725196c1f47                                                                                    About a minute ago   Running             kube-scheduler              0                   6ef79d3852256
	e9bc7a6b60cbb       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   78b948b5d031f
	
	* 
	* ==> coredns [2450635d2a98] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220629120335-24356
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220629120335-24356
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=80ef72c6e06144133907f90b1b2924df52b551ed
	                    minikube.k8s.io/name=default-k8s-different-port-20220629120335-24356
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_29T12_10_02_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Jun 2022 19:09:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220629120335-24356
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Jun 2022 19:11:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Jun 2022 19:11:20 +0000   Wed, 29 Jun 2022 19:09:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Jun 2022 19:11:20 +0000   Wed, 29 Jun 2022 19:09:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Jun 2022 19:11:20 +0000   Wed, 29 Jun 2022 19:09:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 29 Jun 2022 19:11:20 +0000   Wed, 29 Jun 2022 19:10:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    default-k8s-different-port-20220629120335-24356
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbe1e1cef6e940328962dca52b3c5731
	  System UUID:                bc856e45-c15a-405f-9901-feecde9d5756
	  Boot ID:                    fadc233d-8cf8-4f28-b4a1-fb218440cdcd
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-54rws                                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     71s
	  kube-system                 etcd-default-k8s-different-port-20220629120335-24356                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         86s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220629120335-24356             250m (4%)     0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220629120335-24356    200m (3%)     0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-42mtt                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220629120335-24356             100m (1%)     0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 metrics-server-5c6f97fb75-smdz9                                            100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         69s
	  kube-system                 storage-provisioner                                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-tcmv4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-q9lqr                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 70s   kube-proxy       
	  Normal  Starting                 85s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  85s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  85s   kubelet          Node default-k8s-different-port-20220629120335-24356 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    85s   kubelet          Node default-k8s-different-port-20220629120335-24356 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     85s   kubelet          Node default-k8s-different-port-20220629120335-24356 status is now: NodeHasSufficientPID
	  Normal  NodeReady                75s   kubelet          Node default-k8s-different-port-20220629120335-24356 status is now: NodeReady
	  Normal  RegisteredNode           72s   node-controller  Node default-k8s-different-port-20220629120335-24356 event: Registered Node default-k8s-different-port-20220629120335-24356 in Controller
	  Normal  Starting                 7s    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7s    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7s    kubelet          Node default-k8s-different-port-20220629120335-24356 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s    kubelet          Node default-k8s-different-port-20220629120335-24356 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s    kubelet          Node default-k8s-different-port-20220629120335-24356 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [e9bc7a6b60cb] <==
	* {"level":"info","ts":"2022-06-29T19:09:56.848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-06-29T19:09:56.848Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-06-29T19:09:56.849Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-29T19:09:56.849Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-06-29T19:09:56.849Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-06-29T19:09:56.849Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-29T19:09:56.849Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-29T19:09:57.644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-29T19:09:57.644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-29T19:09:57.644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-06-29T19:09:57.644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-06-29T19:09:57.644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-06-29T19:09:57.644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-06-29T19:09:57.644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-06-29T19:09:57.645Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:default-k8s-different-port-20220629120335-24356 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-29T19:09:57.645Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T19:09:57.645Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T19:09:57.645Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:09:57.645Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:09:57.645Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:09:57.645Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:09:57.646Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-06-29T19:09:57.646Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-29T19:09:57.651Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-29T19:09:57.651Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  19:11:27 up  1:19,  0 users,  load average: 1.51, 1.06, 1.20
	Linux default-k8s-different-port-20220629120335-24356 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [eda9cd41cb24] <==
	* I0629 19:10:02.198859       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0629 19:10:02.205721       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0629 19:10:02.213348       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0629 19:10:02.280558       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0629 19:10:15.994943       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0629 19:10:16.643414       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0629 19:10:17.291683       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0629 19:10:18.857767       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.109.122.127]
	E0629 19:10:18.936113       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0629 19:10:19.135852       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.111.8.165]
	I0629 19:10:19.152536       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.103.141.115]
	W0629 19:10:19.751832       1 handler_proxy.go:102] no RequestInfo found in the context
	W0629 19:10:19.751849       1 handler_proxy.go:102] no RequestInfo found in the context
	E0629 19:10:19.751885       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0629 19:10:19.751906       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0629 19:10:19.751930       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0629 19:10:19.753234       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0629 19:11:19.997970       1 handler_proxy.go:102] no RequestInfo found in the context
	E0629 19:11:19.998007       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0629 19:11:19.998013       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0629 19:11:19.998544       1 handler_proxy.go:102] no RequestInfo found in the context
	E0629 19:11:19.998561       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0629 19:11:19.999017       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [735198f9d479] <==
	* I0629 19:10:18.741383       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0629 19:10:18.743987       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0629 19:10:18.749927       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0629 19:10:18.762280       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-smdz9"
	I0629 19:10:18.852586       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0629 19:10:18.858390       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 19:10:18.864027       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	E0629 19:10:18.864140       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0629 19:10:18.869008       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 19:10:18.869461       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 19:10:18.869507       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 19:10:18.872913       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 19:10:18.933310       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 19:10:18.933641       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0629 19:10:18.935238       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 19:10:18.935427       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0629 19:10:18.940421       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0629 19:10:18.940472       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0629 19:10:19.047900       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-tcmv4"
	I0629 19:10:19.048256       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-q9lqr"
	W0629 19:10:25.247177       1 endpointslice_controller.go:302] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: EndpointSlice informer cache is out of date
	E0629 19:10:45.942912       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0629 19:10:46.353460       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0629 19:11:20.216646       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0629 19:11:20.223631       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [e326c378c206] <==
	* I0629 19:10:17.192603       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0629 19:10:17.192648       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0629 19:10:17.192667       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0629 19:10:17.287754       1 server_others.go:206] "Using iptables Proxier"
	I0629 19:10:17.287861       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0629 19:10:17.287877       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0629 19:10:17.287894       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0629 19:10:17.287924       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 19:10:17.288210       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 19:10:17.288434       1 server.go:661] "Version info" version="v1.24.2"
	I0629 19:10:17.288452       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 19:10:17.289193       1 config.go:444] "Starting node config controller"
	I0629 19:10:17.289230       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0629 19:10:17.289194       1 config.go:226] "Starting endpoint slice config controller"
	I0629 19:10:17.289629       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0629 19:10:17.289204       1 config.go:317] "Starting service config controller"
	I0629 19:10:17.289648       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0629 19:10:17.389386       1 shared_informer.go:262] Caches are synced for node config
	I0629 19:10:17.431143       1 shared_informer.go:262] Caches are synced for service config
	I0629 19:10:17.431193       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [e9f55dbf4dfd] <==
	* W0629 19:09:59.444155       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0629 19:09:59.444165       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0629 19:09:59.444336       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0629 19:09:59.444367       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0629 19:09:59.445169       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0629 19:09:59.445238       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0629 19:10:00.294922       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0629 19:10:00.294969       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0629 19:10:00.317355       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0629 19:10:00.317390       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0629 19:10:00.373382       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0629 19:10:00.373451       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0629 19:10:00.381178       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0629 19:10:00.381194       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0629 19:10:00.395903       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0629 19:10:00.395939       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0629 19:10:00.396026       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0629 19:10:00.396056       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0629 19:10:00.441890       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0629 19:10:00.441927       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0629 19:10:00.487132       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0629 19:10:00.487168       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0629 19:10:00.591847       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0629 19:10:00.591883       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0629 19:10:01.136652       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-29 19:05:26 UTC, end at Wed 2022-06-29 19:11:28 UTC. --
	Jun 29 19:11:21 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:21.698797    9936 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxfrp\" (UniqueName: \"kubernetes.io/projected/60c259ab-57b4-463a-b089-fccaa6d3f6c0-kube-api-access-jxfrp\") pod \"coredns-6d4b75cb6d-54rws\" (UID: \"60c259ab-57b4-463a-b089-fccaa6d3f6c0\") " pod="kube-system/coredns-6d4b75cb6d-54rws"
	Jun 29 19:11:21 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:21.698837    9936 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s46s2\" (UniqueName: \"kubernetes.io/projected/322de8c5-d47e-4bb0-9d7d-ef640626c70c-kube-api-access-s46s2\") pod \"kube-proxy-42mtt\" (UID: \"322de8c5-d47e-4bb0-9d7d-ef640626c70c\") " pod="kube-system/kube-proxy-42mtt"
	Jun 29 19:11:21 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:21.698861    9936 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f4468363-29b5-4d36-beef-5610f1e1625c-tmp-volume\") pod \"dashboard-metrics-scraper-dffd48c4c-tcmv4\" (UID: \"f4468363-29b5-4d36-beef-5610f1e1625c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-tcmv4"
	Jun 29 19:11:21 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:21.698952    9936 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bc59072d-a402-4441-ace1-1ade0e3b7e2f-tmp\") pod \"storage-provisioner\" (UID: \"bc59072d-a402-4441-ace1-1ade0e3b7e2f\") " pod="kube-system/storage-provisioner"
	Jun 29 19:11:21 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:21.699086    9936 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6lxr\" (UniqueName: \"kubernetes.io/projected/2661f4fb-d410-4b0b-9abe-0c030e00d8b3-kube-api-access-d6lxr\") pod \"metrics-server-5c6f97fb75-smdz9\" (UID: \"2661f4fb-d410-4b0b-9abe-0c030e00d8b3\") " pod="kube-system/metrics-server-5c6f97fb75-smdz9"
	Jun 29 19:11:21 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:21.699129    9936 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-578j8\" (UniqueName: \"kubernetes.io/projected/bc59072d-a402-4441-ace1-1ade0e3b7e2f-kube-api-access-578j8\") pod \"storage-provisioner\" (UID: \"bc59072d-a402-4441-ace1-1ade0e3b7e2f\") " pod="kube-system/storage-provisioner"
	Jun 29 19:11:21 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:21.699147    9936 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/513c4ddc-31bf-4472-b555-4f007825f07f-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-q9lqr\" (UID: \"513c4ddc-31bf-4472-b555-4f007825f07f\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-q9lqr"
	Jun 29 19:11:21 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:21.699178    9936 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/322de8c5-d47e-4bb0-9d7d-ef640626c70c-xtables-lock\") pod \"kube-proxy-42mtt\" (UID: \"322de8c5-d47e-4bb0-9d7d-ef640626c70c\") " pod="kube-system/kube-proxy-42mtt"
	Jun 29 19:11:21 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:21.699228    9936 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/322de8c5-d47e-4bb0-9d7d-ef640626c70c-lib-modules\") pod \"kube-proxy-42mtt\" (UID: \"322de8c5-d47e-4bb0-9d7d-ef640626c70c\") " pod="kube-system/kube-proxy-42mtt"
	Jun 29 19:11:21 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:21.699277    9936 reconciler.go:157] "Reconciler: start to sync state"
	Jun 29 19:11:22 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:22.840574    9936 request.go:601] Waited for 1.128455817s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8444/api/v1/namespaces/kube-system/pods
	Jun 29 19:11:22 default-k8s-different-port-20220629120335-24356 kubelet[9936]: E0629 19:11:22.921066    9936 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-default-k8s-different-port-20220629120335-24356\" already exists" pod="kube-system/kube-controller-manager-default-k8s-different-port-20220629120335-24356"
	Jun 29 19:11:23 default-k8s-different-port-20220629120335-24356 kubelet[9936]: E0629 19:11:23.091691    9936 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-default-k8s-different-port-20220629120335-24356\" already exists" pod="kube-system/kube-scheduler-default-k8s-different-port-20220629120335-24356"
	Jun 29 19:11:23 default-k8s-different-port-20220629120335-24356 kubelet[9936]: E0629 19:11:23.244578    9936 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-default-k8s-different-port-20220629120335-24356\" already exists" pod="kube-system/etcd-default-k8s-different-port-20220629120335-24356"
	Jun 29 19:11:23 default-k8s-different-port-20220629120335-24356 kubelet[9936]: E0629 19:11:23.503695    9936 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-default-k8s-different-port-20220629120335-24356\" already exists" pod="kube-system/kube-apiserver-default-k8s-different-port-20220629120335-24356"
	Jun 29 19:11:24 default-k8s-different-port-20220629120335-24356 kubelet[9936]: E0629 19:11:24.112050    9936 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 29 19:11:24 default-k8s-different-port-20220629120335-24356 kubelet[9936]: E0629 19:11:24.112110    9936 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jun 29 19:11:24 default-k8s-different-port-20220629120335-24356 kubelet[9936]: E0629 19:11:24.112229    9936 kuberuntime_manager.go:905] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-d6lxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c6f97fb75-smdz9_kube-system(2661f4fb-d410-4b0b-9abe-0c030e00d8b3): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jun 29 19:11:24 default-k8s-different-port-20220629120335-24356 kubelet[9936]: E0629 19:11:24.112257    9936 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c6f97fb75-smdz9" podUID=2661f4fb-d410-4b0b-9abe-0c030e00d8b3
	Jun 29 19:11:24 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:24.343685    9936 scope.go:110] "RemoveContainer" containerID="04e7386bad2372abddbca585ae7218086dd2f9460b7e3264509d1d6845fd2962"
	Jun 29 19:11:25 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:25.832692    9936 scope.go:110] "RemoveContainer" containerID="04e7386bad2372abddbca585ae7218086dd2f9460b7e3264509d1d6845fd2962"
	Jun 29 19:11:25 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:25.833009    9936 scope.go:110] "RemoveContainer" containerID="84921e9ab3774b4c024aeba5875cc6bf0ab247d92e70290b5543ce1242e7f06e"
	Jun 29 19:11:25 default-k8s-different-port-20220629120335-24356 kubelet[9936]: E0629 19:11:25.833172    9936 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-tcmv4_kubernetes-dashboard(f4468363-29b5-4d36-beef-5610f1e1625c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-tcmv4" podUID=f4468363-29b5-4d36-beef-5610f1e1625c
	Jun 29 19:11:26 default-k8s-different-port-20220629120335-24356 kubelet[9936]: I0629 19:11:26.843210    9936 scope.go:110] "RemoveContainer" containerID="84921e9ab3774b4c024aeba5875cc6bf0ab247d92e70290b5543ce1242e7f06e"
	Jun 29 19:11:26 default-k8s-different-port-20220629120335-24356 kubelet[9936]: E0629 19:11:26.843413    9936 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-tcmv4_kubernetes-dashboard(f4468363-29b5-4d36-beef-5610f1e1625c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-tcmv4" podUID=f4468363-29b5-4d36-beef-5610f1e1625c
	
	* 
	* ==> kubernetes-dashboard [70de6e61337e] <==
	* 2022/06/29 19:10:32 Using namespace: kubernetes-dashboard
	2022/06/29 19:10:32 Using in-cluster config to connect to apiserver
	2022/06/29 19:10:32 Using secret token for csrf signing
	2022/06/29 19:10:32 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/29 19:10:32 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/06/29 19:10:32 Successful initial request to the apiserver, version: v1.24.2
	2022/06/29 19:10:32 Generating JWE encryption key
	2022/06/29 19:10:32 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/06/29 19:10:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/06/29 19:10:32 Initializing JWE encryption key from synchronized object
	2022/06/29 19:10:32 Creating in-cluster Sidecar client
	2022/06/29 19:10:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/29 19:10:32 Serving insecurely on HTTP port: 9090
	2022/06/29 19:11:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/06/29 19:10:32 Starting overwatch
	
	* 
	* ==> storage-provisioner [9902a6f6a073] <==
	* I0629 19:10:19.339446       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0629 19:10:19.348022       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0629 19:10:19.348072       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0629 19:10:19.354680       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0629 19:10:19.354977       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220629120335-24356_958d464d-0577-4625-be7c-ed7ea2c028c3!
	I0629 19:10:19.355688       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46e31cb1-36ec-437b-bd54-43b2929c0a6b", APIVersion:"v1", ResourceVersion:"472", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20220629120335-24356_958d464d-0577-4625-be7c-ed7ea2c028c3 became leader
	I0629 19:10:19.455188       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220629120335-24356_958d464d-0577-4625-be7c-ed7ea2c028c3!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220629120335-24356 -n default-k8s-different-port-20220629120335-24356
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220629120335-24356 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-smdz9
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220629120335-24356 describe pod metrics-server-5c6f97fb75-smdz9
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220629120335-24356 describe pod metrics-server-5c6f97fb75-smdz9: exit status 1 (290.659429ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-smdz9" not found

** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220629120335-24356 describe pod metrics-server-5c6f97fb75-smdz9: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (43.83s)

x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0629 12:10:58.652776   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:11:14.391798   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/bridge-20220629112950-24356/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:11:59.538297   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/enable-default-cni-20220629112950-24356/client.crt: no such file or directory
E0629 12:12:00.683298   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629112951-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:12:18.332710   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubenet-20220629112950-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:13:46.702407   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629112951-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:14:24.495985   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:14:32.694354   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629114832-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:14:59.651660   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/false-20220629112951-24356/client.crt: no such file or directory
E0629 12:15:00.899896   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/client.crt: no such file or directory
E0629 12:15:00.906403   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/client.crt: no such file or directory
E0629 12:15:00.918241   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/client.crt: no such file or directory
E0629 12:15:00.940437   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/client.crt: no such file or directory
E0629 12:15:00.981941   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/client.crt: no such file or directory
E0629 12:15:01.062263   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/client.crt: no such file or directory
E0629 12:15:01.224501   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/client.crt: no such file or directory
E0629 12:15:01.544674   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:15:02.184812   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/client.crt: no such file or directory
E0629 12:15:03.466979   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/client.crt: no such file or directory
E0629 12:15:06.027747   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/client.crt: no such file or directory
E0629 12:15:08.807269   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629112951-24356/client.crt: no such file or directory
E0629 12:15:11.148005   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:15:21.390572   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:15:41.871494   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:15:47.054232   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629112950-24356/client.crt: no such file or directory
E0629 12:15:55.775503   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629114832-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:15:58.660924   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:16:07.894516   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
E0629 12:16:14.400432   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/bridge-20220629112950-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:16:22.834996   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60325/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0629 12:16:59.546860   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/enable-default-cni-20220629112950-24356/client.crt: no such file or directory
E0629 12:17:00.691971   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629112951-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
E0629 12:17:18.339760   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubenet-20220629112950-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0629 12:17:44.757652   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/client.crt: no such file or directory
E0629 12:18:46.708489   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629112951-24356/client.crt: no such file or directory
E0629 12:18:50.139721   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629112950-24356/client.crt: no such file or directory
E0629 12:19:24.503399   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0629 12:19:32.700517   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629114832-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0629 12:19:59.661563   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/false-20220629112951-24356/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356: exit status 2 (432.499996ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-20220629114717-24356" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-20220629114717-24356 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220629114717-24356 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.045µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-20220629114717-24356 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220629114717-24356
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220629114717-24356:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2",
	        "Created": "2022-06-29T18:47:24.686705454Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246394,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T18:53:02.298159951Z",
	            "FinishedAt": "2022-06-29T18:52:59.492186161Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/hosts",
	        "LogPath": "/var/lib/docker/containers/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2/b1f5e01895cc1103306679d3533ef11cedc6b295be9176de1584494d8e6541b2-json.log",
	        "Name": "/old-k8s-version-20220629114717-24356",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220629114717-24356:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220629114717-24356",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132-init/diff:/var/lib/docker/overlay2/fffebe0fdfada5807aeb835ff23043496ab70477725ee4f168b630301ac03e45/diff:/var/lib/docker/overlay2/d4eb6d2f34aa8e5c143d900dccdec5da9e3d130567442e6745d4efac5202fe49/diff:/var/lib/docker/overlay2/eb35fadba12ed9c48500d69b77e98e7dd72e90d3de5197d58b370df5b5dca4c7/diff:/var/lib/docker/overlay2/7b63894f671ef1edaa7c3b80a2acbde52dcdb21970e320799b6884e79553ea3e/diff:/var/lib/docker/overlay2/3740b6bc6ff226137eb09a6350d4395dc04bd9012c6c66125dc2ea6b663082cd/diff:/var/lib/docker/overlay2/a2fda66ed4937725e85838baed61cac418abe2ba55b4e664bf944246efcdd371/diff:/var/lib/docker/overlay2/574408913c5c73ee699b85768bbb4c0ce70e697bf6eb623e32017c62e8413acd/diff:/var/lib/docker/overlay2/1cde03c3877bfb18ad0533f814863e3030abec268ff30faceab8815ea7e2daf2/diff:/var/lib/docker/overlay2/52bf889e64b2ea0160f303622d5febb9c52b864e5a6dc2bfa5db90933ccaaa29/diff:/var/lib/docker/overlay2/b131e6
ae4a7a7f5705d087e4001676276e4daa26d6acfc99799bb4992e322410/diff:/var/lib/docker/overlay2/3f5c774f6f46936a974bfc6530b012fda75a59b22450e3342486fe400ab4b531/diff:/var/lib/docker/overlay2/8462528084f0c44a79e421427e0e4bc9ddd7642428c47ff1899d41b265223245/diff:/var/lib/docker/overlay2/cb9765866d13ba37669ec242ea0a1af87c92c7291c716e52037a2ccadc64ac82/diff:/var/lib/docker/overlay2/f0d06e6fa53f3ca9622f1efcfac6fe3fd18d2e5b9e07be3d624b0b9987073e55/diff:/var/lib/docker/overlay2/4ebd12d8b25cff2d3d8a989c047b696088121f0964cc7f94c6d0178ef16e3e1f/diff:/var/lib/docker/overlay2/40e16f5720fd3a8c1c8792aea0ec143af819f19cad845dde40b57ed7e372ab73/diff:/var/lib/docker/overlay2/3ce5ee64ba683c997a13b7ffa65978b4c9652772729737facd794209d49251c3/diff:/var/lib/docker/overlay2/c55c549a78d490ea576942661ba65103ea2992693548217973bb8fa1a5948b74/diff:/var/lib/docker/overlay2/4651b16dbc2e22b8a43dc1154546514f2076168d12f9c108f85fe7c6e60325f0/diff:/var/lib/docker/overlay2/9576343ea03501b15b520a83ffdc675c6d9ecd501f6ffcf6564dd75aa4f2812a/diff:/var/lib/d
ocker/overlay2/635ba7d01f96fd1ec1acabf157f4e5c00cbf80adf65b7f8873e444745fef2c9b/diff:/var/lib/docker/overlay2/6bbe0ce6ca00a7eb5bd7c22def5fcab4ebecab4a0b4cbc5ed236429671a41b6c/diff:/var/lib/docker/overlay2/b335551ba0fcfd6bff6ef5627289041f3083dc338e67b4f4728d4937bb6fb33a/diff:/var/lib/docker/overlay2/58cd90f6ad9016f3c4befb63eac504c9d2f0fc66251c5c9e3348080785d3cec4/diff:/var/lib/docker/overlay2/b7d943a8463e032d405d531846436b89574f10efeea6e4f2df92e3bb0e169d8e/diff:/var/lib/docker/overlay2/e633899f71c18e322af1b75837392bc89fd4275534b5bc70037965b0b80a770d/diff:/var/lib/docker/overlay2/651aabda39b5851bd186e23bc84f1029d819ed8eb032b13ac12f50f3d1486bfb/diff:/var/lib/docker/overlay2/3b137e27694d242a419b3fd2f8605837edfe77dae9462c63c3d7b41538e82591/diff:/var/lib/docker/overlay2/e9d4369b871c47acb146b73f8cbe14b89b0f74027df9117a7dc73f5dee8fee1c/diff:/var/lib/docker/overlay2/9379269362a969b07cc7d7f9faff9fa3b745529df38758733014a5dbe2470775/diff:/var/lib/docker/overlay2/9231c154723fa536d9894f703ec0388448e8611d5a01d54bca3a5b0a0b1
7ffd2/diff:/var/lib/docker/overlay2/9610e37ded5c6da7bd2c8edc56c3ae864637bb354f8ea3d6d1ccee6bd5c2aa7f/diff:/var/lib/docker/overlay2/025ecca5e756b1b8177204df7b2f2567a76dda456b2f1a8e312efd63150a8943/diff:/var/lib/docker/overlay2/7e69089e438e096c36ea0a4a37280fd036841e3287e57635e3407eb58fc0b6da/diff:/var/lib/docker/overlay2/c6d9ef67ed33e64c8ac8c4cdc7c33eb68f5266987969676165cabc2cf2fd346b/diff:/var/lib/docker/overlay2/394627c68237f7993b91eb0c377001630bb2e709dd58f65d899d44a3586dae91/diff:/var/lib/docker/overlay2/0c0c3c94789fc85cd70d9ee2b56d67ce6471d4dced47f21f15152d4edb6bc3e5/diff:/var/lib/docker/overlay2/849809e48c9bcbfe092aa063fcd274f284eeacde89acbb602b439d4cf0aef9b6/diff:/var/lib/docker/overlay2/49c27f0a55f204b161aa2da33ba8004f46cb93bf673975ad1b6286ce659db632/diff:/var/lib/docker/overlay2/a712a8f5cdb2f3840c706296240407405826d2936df034393c1ddf3cf2480b5f/diff:/var/lib/docker/overlay2/47949bfd134ff7a50def5e9b3af3424faf216354d1f157552f3c63c67c2728ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8bbb3a836ae906780806bd799b3e65882c687028377353ae9c79c7c4e6a3132/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220629114717-24356",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220629114717-24356/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220629114717-24356",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220629114717-24356",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220629114717-24356",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f01a004add6a38bbd2eeef63591d683ecdc0a86e7e09d3f450b9f36251384a44",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60321"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60322"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60323"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60324"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60325"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f01a004add6a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220629114717-24356": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b1f5e01895cc",
	                        "old-k8s-version-20220629114717-24356"
	                    ],
	                    "NetworkID": "7e2ec4ec0dd8da4d477d55acc03296107258203e7a7a266adf169e3b0ee9c64c",
	                    "EndpointID": "5c3ab2122cf8bbb30617dcaafec5da849a4b6aecffda698851a0bf59a65b2b47",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356: exit status 2 (425.087635ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220629114717-24356 logs -n 25
E0629 12:20:00.910045   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/default-k8s-different-port-20220629120335-24356/client.crt: no such file or directory
E0629 12:20:03.748077   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629112951-24356/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220629114717-24356 logs -n 25: (3.557887338s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| pause   | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:02 PDT | 29 Jun 22 12:02 PDT |
	|         | embed-certs-20220629115611-24356                           |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| unpause | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | embed-certs-20220629115611-24356                           |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | embed-certs-20220629115611-24356                           |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | embed-certs-20220629115611-24356                           |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | disable-driver-mounts-20220629120335-24356                 |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:04 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |          |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 29 Jun 22 12:05 PDT | 29 Jun 22 12:05 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:05 PDT | 29 Jun 22 12:05 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 29 Jun 22 12:05 PDT | 29 Jun 22 12:05 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:05 PDT | 29 Jun 22 12:10 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |          |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| ssh     | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:10 PDT | 29 Jun 22 12:10 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |         |         |                     |                     |
	| pause   | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:10 PDT | 29 Jun 22 12:10 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| unpause | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:11 PDT | 29 Jun 22 12:11 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:11 PDT | 29 Jun 22 12:11 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:11 PDT | 29 Jun 22 12:11 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	| start   | -p newest-cni-20220629121133-24356 --memory=2200           | minikube | jenkins | v1.26.0 | 29 Jun 22 12:11 PDT | 29 Jun 22 12:12 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.2              |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 29 Jun 22 12:12 PDT | 29 Jun 22 12:12 PDT |
	|         | newest-cni-20220629121133-24356                            |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:12 PDT | 29 Jun 22 12:12 PDT |
	|         | newest-cni-20220629121133-24356                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 29 Jun 22 12:12 PDT | 29 Jun 22 12:12 PDT |
	|         | newest-cni-20220629121133-24356                            |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p newest-cni-20220629121133-24356 --memory=2200           | minikube | jenkins | v1.26.0 | 29 Jun 22 12:12 PDT | 29 Jun 22 12:12 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.2              |          |         |         |                     |                     |
	| ssh     | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:12 PDT | 29 Jun 22 12:12 PDT |
	|         | newest-cni-20220629121133-24356                            |          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |         |         |                     |                     |
	| pause   | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:12 PDT | 29 Jun 22 12:12 PDT |
	|         | newest-cni-20220629121133-24356                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| unpause | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:13 PDT | 29 Jun 22 12:13 PDT |
	|         | newest-cni-20220629121133-24356                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:13 PDT | 29 Jun 22 12:13 PDT |
	|         | newest-cni-20220629121133-24356                            |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:13 PDT | 29 Jun 22 12:13 PDT |
	|         | newest-cni-20220629121133-24356                            |          |         |         |                     |                     |
	|---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/29 12:12:29
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 12:12:29.588569   41733 out.go:296] Setting OutFile to fd 1 ...
	I0629 12:12:29.588742   41733 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 12:12:29.588747   41733 out.go:309] Setting ErrFile to fd 2...
	I0629 12:12:29.588751   41733 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 12:12:29.589081   41733 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 12:12:29.589351   41733 out.go:303] Setting JSON to false
	I0629 12:12:29.604054   41733 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":11517,"bootTime":1656518432,"procs":373,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0629 12:12:29.604211   41733 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 12:12:29.626180   41733 out.go:177] * [newest-cni-20220629121133-24356] minikube v1.26.0 on Darwin 12.4
	I0629 12:12:29.668306   41733 notify.go:193] Checking for updates...
	I0629 12:12:29.689036   41733 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 12:12:29.731359   41733 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 12:12:29.752253   41733 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0629 12:12:29.773342   41733 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 12:12:29.794519   41733 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 12:12:29.817018   41733 config.go:178] Loaded profile config "newest-cni-20220629121133-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 12:12:29.817692   41733 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 12:12:29.888455   41733 docker.go:137] docker version: linux-20.10.16
	I0629 12:12:29.888591   41733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 12:12:30.011986   41733 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 19:12:29.950877572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 12:12:30.054549   41733 out.go:177] * Using the docker driver based on existing profile
	I0629 12:12:30.075406   41733 start.go:284] selected driver: docker
	I0629 12:12:30.075423   41733 start.go:808] validating driver "docker" against &{Name:newest-cni-20220629121133-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220629121133-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 12:12:30.075522   41733 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 12:12:30.078607   41733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 12:12:30.200514   41733 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 19:12:30.14084278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 12:12:30.200716   41733 start_flags.go:872] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0629 12:12:30.200733   41733 cni.go:95] Creating CNI manager for ""
	I0629 12:12:30.200742   41733 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 12:12:30.200751   41733 start_flags.go:310] config:
	{Name:newest-cni-20220629121133-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220629121133-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 12:12:30.222917   41733 out.go:177] * Starting control plane node newest-cni-20220629121133-24356 in cluster newest-cni-20220629121133-24356
	I0629 12:12:30.244330   41733 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 12:12:30.265398   41733 out.go:177] * Pulling base image ...
	I0629 12:12:30.308582   41733 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 12:12:30.308633   41733 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 12:12:30.308662   41733 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0629 12:12:30.308690   41733 cache.go:57] Caching tarball of preloaded images
	I0629 12:12:30.308864   41733 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 12:12:30.308882   41733 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0629 12:12:30.309747   41733 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/config.json ...
	I0629 12:12:30.374617   41733 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 12:12:30.374655   41733 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 12:12:30.374668   41733 cache.go:208] Successfully downloaded all kic artifacts
	I0629 12:12:30.374734   41733 start.go:352] acquiring machines lock for newest-cni-20220629121133-24356: {Name:mk042a3b5f3c7fb19f5a27cdd0c4d3bdf872dc19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 12:12:30.374833   41733 start.go:356] acquired machines lock for "newest-cni-20220629121133-24356" in 79.691µs
	I0629 12:12:30.374856   41733 start.go:94] Skipping create...Using existing machine configuration
	I0629 12:12:30.374862   41733 fix.go:55] fixHost starting: 
	I0629 12:12:30.375085   41733 cli_runner.go:164] Run: docker container inspect newest-cni-20220629121133-24356 --format={{.State.Status}}
	I0629 12:12:30.442031   41733 fix.go:103] recreateIfNeeded on newest-cni-20220629121133-24356: state=Stopped err=<nil>
	W0629 12:12:30.442065   41733 fix.go:129] unexpected machine state, will restart: <nil>
	I0629 12:12:30.464074   41733 out.go:177] * Restarting existing docker container for "newest-cni-20220629121133-24356" ...
	I0629 12:12:30.486024   41733 cli_runner.go:164] Run: docker start newest-cni-20220629121133-24356
	I0629 12:12:30.850374   41733 cli_runner.go:164] Run: docker container inspect newest-cni-20220629121133-24356 --format={{.State.Status}}
	I0629 12:12:30.924181   41733 kic.go:416] container "newest-cni-20220629121133-24356" state is running.
	I0629 12:12:30.925115   41733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220629121133-24356
	I0629 12:12:31.006727   41733 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/config.json ...
	I0629 12:12:31.007237   41733 machine.go:88] provisioning docker machine ...
	I0629 12:12:31.007269   41733 ubuntu.go:169] provisioning hostname "newest-cni-20220629121133-24356"
	I0629 12:12:31.007380   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:31.083305   41733 main.go:134] libmachine: Using SSH client type: native
	I0629 12:12:31.083491   41733 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 62539 <nil> <nil>}
	I0629 12:12:31.083504   41733 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220629121133-24356 && echo "newest-cni-20220629121133-24356" | sudo tee /etc/hostname
	I0629 12:12:31.211242   41733 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220629121133-24356
	
	I0629 12:12:31.211315   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:31.286171   41733 main.go:134] libmachine: Using SSH client type: native
	I0629 12:12:31.286391   41733 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 62539 <nil> <nil>}
	I0629 12:12:31.286414   41733 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220629121133-24356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220629121133-24356/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220629121133-24356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 12:12:31.404993   41733 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 12:12:31.405015   41733 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube}
	I0629 12:12:31.405050   41733 ubuntu.go:177] setting up certificates
	I0629 12:12:31.405062   41733 provision.go:83] configureAuth start
	I0629 12:12:31.405134   41733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220629121133-24356
	I0629 12:12:31.479685   41733 provision.go:138] copyHostCerts
	I0629 12:12:31.479785   41733 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem, removing ...
	I0629 12:12:31.479795   41733 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem
	I0629 12:12:31.479881   41733 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem (1082 bytes)
	I0629 12:12:31.480083   41733 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem, removing ...
	I0629 12:12:31.480095   41733 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem
	I0629 12:12:31.480153   41733 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem (1123 bytes)
	I0629 12:12:31.480301   41733 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem, removing ...
	I0629 12:12:31.480307   41733 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem
	I0629 12:12:31.480382   41733 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem (1675 bytes)
	I0629 12:12:31.480500   41733 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220629121133-24356 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220629121133-24356]
	I0629 12:12:31.553993   41733 provision.go:172] copyRemoteCerts
	I0629 12:12:31.554070   41733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 12:12:31.554128   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:31.632422   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:31.719010   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0629 12:12:31.736812   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0629 12:12:31.754703   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0629 12:12:31.775146   41733 provision.go:86] duration metric: configureAuth took 370.060143ms
	I0629 12:12:31.775160   41733 ubuntu.go:193] setting minikube options for container-runtime
	I0629 12:12:31.775316   41733 config.go:178] Loaded profile config "newest-cni-20220629121133-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 12:12:31.775378   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:31.847694   41733 main.go:134] libmachine: Using SSH client type: native
	I0629 12:12:31.847864   41733 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 62539 <nil> <nil>}
	I0629 12:12:31.847875   41733 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0629 12:12:31.967172   41733 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0629 12:12:31.967183   41733 ubuntu.go:71] root file system type: overlay
	I0629 12:12:31.967317   41733 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0629 12:12:31.967387   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:32.037988   41733 main.go:134] libmachine: Using SSH client type: native
	I0629 12:12:32.038135   41733 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 62539 <nil> <nil>}
	I0629 12:12:32.038189   41733 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0629 12:12:32.167065   41733 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0629 12:12:32.167155   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:32.238743   41733 main.go:134] libmachine: Using SSH client type: native
	I0629 12:12:32.238893   41733 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 62539 <nil> <nil>}
	I0629 12:12:32.238905   41733 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0629 12:12:32.360199   41733 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 12:12:32.360216   41733 machine.go:91] provisioned docker machine in 1.352928421s
	I0629 12:12:32.360226   41733 start.go:306] post-start starting for "newest-cni-20220629121133-24356" (driver="docker")
	I0629 12:12:32.360231   41733 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 12:12:32.360309   41733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 12:12:32.360361   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:32.431487   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:32.517761   41733 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 12:12:32.521520   41733 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 12:12:32.521537   41733 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 12:12:32.521543   41733 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 12:12:32.521548   41733 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 12:12:32.521559   41733 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/addons for local assets ...
	I0629 12:12:32.521666   41733 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files for local assets ...
	I0629 12:12:32.521801   41733 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem -> 243562.pem in /etc/ssl/certs
	I0629 12:12:32.521971   41733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 12:12:32.529745   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /etc/ssl/certs/243562.pem (1708 bytes)
	I0629 12:12:32.546093   41733 start.go:309] post-start completed in 185.852538ms
	I0629 12:12:32.546163   41733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 12:12:32.546210   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:32.617116   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:32.700718   41733 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 12:12:32.705139   41733 fix.go:57] fixHost completed within 2.33019891s
	I0629 12:12:32.705152   41733 start.go:81] releasing machines lock for "newest-cni-20220629121133-24356", held for 2.330240179s
	I0629 12:12:32.705224   41733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220629121133-24356
	I0629 12:12:32.776217   41733 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 12:12:32.776227   41733 ssh_runner.go:195] Run: systemctl --version
	I0629 12:12:32.776278   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:32.776310   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:32.852787   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:32.854483   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:33.421714   41733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0629 12:12:33.429145   41733 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I0629 12:12:33.441573   41733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 12:12:33.505252   41733 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0629 12:12:33.580689   41733 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0629 12:12:33.591697   41733 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0629 12:12:33.591757   41733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 12:12:33.601297   41733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 12:12:33.613993   41733 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0629 12:12:33.679329   41733 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0629 12:12:33.744434   41733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 12:12:33.812377   41733 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0629 12:12:34.075341   41733 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0629 12:12:34.147333   41733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 12:12:34.213850   41733 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0629 12:12:34.223490   41733 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0629 12:12:34.223554   41733 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0629 12:12:34.227483   41733 start.go:468] Will wait 60s for crictl version
	I0629 12:12:34.227524   41733 ssh_runner.go:195] Run: sudo crictl version
	I0629 12:12:34.255687   41733 start.go:477] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0629 12:12:34.255756   41733 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 12:12:34.290892   41733 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 12:12:34.367971   41733 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0629 12:12:34.368104   41733 cli_runner.go:164] Run: docker exec -t newest-cni-20220629121133-24356 dig +short host.docker.internal
	I0629 12:12:34.494783   41733 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0629 12:12:34.494880   41733 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0629 12:12:34.499195   41733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 12:12:34.508835   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:34.602759   41733 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0629 12:12:34.623818   41733 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 12:12:34.623948   41733 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 12:12:34.654471   41733 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0629 12:12:34.654492   41733 docker.go:533] Images already preloaded, skipping extraction
	I0629 12:12:34.654556   41733 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 12:12:34.685516   41733 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0629 12:12:34.685540   41733 cache_images.go:84] Images are preloaded, skipping loading
	I0629 12:12:34.685619   41733 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0629 12:12:34.759279   41733 cni.go:95] Creating CNI manager for ""
	I0629 12:12:34.759290   41733 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 12:12:34.759307   41733 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0629 12:12:34.759324   41733 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220629121133-24356 NodeName:newest-cni-20220629121133-24356 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 12:12:34.759449   41733 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-20220629121133-24356"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0629 12:12:34.759532   41733 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220629121133-24356 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220629121133-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0629 12:12:34.759600   41733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0629 12:12:34.767268   41733 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 12:12:34.767320   41733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0629 12:12:34.774536   41733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (530 bytes)
	I0629 12:12:34.787443   41733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 12:12:34.799855   41733 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2189 bytes)
	I0629 12:12:34.812169   41733 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0629 12:12:34.815908   41733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 12:12:34.825528   41733 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356 for IP: 192.168.67.2
	I0629 12:12:34.825648   41733 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key
	I0629 12:12:34.825704   41733 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key
	I0629 12:12:34.825782   41733 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/client.key
	I0629 12:12:34.825849   41733 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/apiserver.key.c7fa3a9e
	I0629 12:12:34.825919   41733 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/proxy-client.key
	I0629 12:12:34.826130   41733 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem (1338 bytes)
	W0629 12:12:34.826169   41733 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356_empty.pem, impossibly tiny 0 bytes
	I0629 12:12:34.826180   41733 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem (1679 bytes)
	I0629 12:12:34.826212   41733 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem (1082 bytes)
	I0629 12:12:34.826244   41733 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem (1123 bytes)
	I0629 12:12:34.826274   41733 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem (1675 bytes)
	I0629 12:12:34.826337   41733 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem (1708 bytes)
	I0629 12:12:34.826873   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0629 12:12:34.843557   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0629 12:12:34.860588   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0629 12:12:34.877409   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0629 12:12:34.893984   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 12:12:34.910737   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0629 12:12:34.927624   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 12:12:34.944443   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0629 12:12:34.961512   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem --> /usr/share/ca-certificates/24356.pem (1338 bytes)
	I0629 12:12:34.978266   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /usr/share/ca-certificates/243562.pem (1708 bytes)
	I0629 12:12:34.995472   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 12:12:35.012505   41733 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0629 12:12:35.024964   41733 ssh_runner.go:195] Run: openssl version
	I0629 12:12:35.030215   41733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/243562.pem && ln -fs /usr/share/ca-certificates/243562.pem /etc/ssl/certs/243562.pem"
	I0629 12:12:35.038129   41733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/243562.pem
	I0629 12:12:35.042019   41733 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 17:58 /usr/share/ca-certificates/243562.pem
	I0629 12:12:35.042061   41733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/243562.pem
	I0629 12:12:35.047267   41733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/243562.pem /etc/ssl/certs/3ec20f2e.0"
	I0629 12:12:35.054538   41733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 12:12:35.062220   41733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 12:12:35.066203   41733 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 17:54 /usr/share/ca-certificates/minikubeCA.pem
	I0629 12:12:35.066240   41733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 12:12:35.071307   41733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 12:12:35.078467   41733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24356.pem && ln -fs /usr/share/ca-certificates/24356.pem /etc/ssl/certs/24356.pem"
	I0629 12:12:35.086274   41733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24356.pem
	I0629 12:12:35.090276   41733 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 17:58 /usr/share/ca-certificates/24356.pem
	I0629 12:12:35.090313   41733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24356.pem
	I0629 12:12:35.095533   41733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24356.pem /etc/ssl/certs/51391683.0"
	I0629 12:12:35.102606   41733 kubeadm.go:395] StartCluster: {Name:newest-cni-20220629121133-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220629121133-24356 Namespace:default APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubele
t:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 12:12:35.102713   41733 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 12:12:35.132448   41733 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0629 12:12:35.140266   41733 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0629 12:12:35.140281   41733 kubeadm.go:626] restartCluster start
	I0629 12:12:35.140327   41733 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0629 12:12:35.146994   41733 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:35.147056   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:35.219650   41733 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220629121133-24356" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 12:12:35.219829   41733 kubeconfig.go:127] "newest-cni-20220629121133-24356" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig - will repair!
	I0629 12:12:35.220162   41733 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk20ebad566718388182fa7c9da1cb4ef6bd9ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 12:12:35.221494   41733 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0629 12:12:35.229173   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:35.229229   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:35.237398   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:35.438653   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:35.438806   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:35.449653   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:35.638446   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:35.638653   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:35.649603   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:35.839288   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:35.839468   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:35.850410   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:36.038612   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:36.038695   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:36.048631   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:36.238683   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:36.238815   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:36.249963   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:36.438649   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:36.438836   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:36.450067   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:36.638692   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:36.638870   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:36.649564   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:36.838638   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:36.838714   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:36.847331   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:37.038701   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:37.038777   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:37.049187   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:37.238747   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:37.238937   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:37.249608   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:37.438729   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:37.438903   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:37.449567   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:37.639628   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:37.639781   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:37.650435   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:37.838708   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:37.838812   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:37.849567   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:38.038733   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:38.038840   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:38.049254   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:38.239139   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:38.239235   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:38.250125   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:38.250135   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:38.250179   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:38.258469   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:38.258482   41733 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0629 12:12:38.258492   41733 kubeadm.go:1092] stopping kube-system containers ...
	I0629 12:12:38.258551   41733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 12:12:38.289744   41733 docker.go:434] Stopping containers: [b9102467e462 b7ac5a073ab7 1aaad07a6a07 137a44de5e43 995d90c1cfbe 2da50998e266 2c49cd15cdb0 bd178c2d55c0 67eaf5abb356 c6cdb8f06829 c6b7f1c8b2e0 154ec38f5f06 24248b5ec744 3ee0db0d474b 5270423c28e0 fcf2cbbeac73]
	I0629 12:12:38.289817   41733 ssh_runner.go:195] Run: docker stop b9102467e462 b7ac5a073ab7 1aaad07a6a07 137a44de5e43 995d90c1cfbe 2da50998e266 2c49cd15cdb0 bd178c2d55c0 67eaf5abb356 c6cdb8f06829 c6b7f1c8b2e0 154ec38f5f06 24248b5ec744 3ee0db0d474b 5270423c28e0 fcf2cbbeac73
	I0629 12:12:38.320147   41733 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0629 12:12:38.330428   41733 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 12:12:38.340507   41733 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jun 29 19:11 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun 29 19:11 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jun 29 19:12 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun 29 19:11 /etc/kubernetes/scheduler.conf
	
	I0629 12:12:38.340589   41733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0629 12:12:38.350519   41733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0629 12:12:38.357684   41733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0629 12:12:38.364728   41733 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:38.364780   41733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0629 12:12:38.371710   41733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0629 12:12:38.379123   41733 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:38.379175   41733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0629 12:12:38.385993   41733 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 12:12:38.393168   41733 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0629 12:12:38.393180   41733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:12:38.436431   41733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:12:39.589898   41733 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.153415676s)
	I0629 12:12:39.610848   41733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:12:39.777939   41733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:12:39.828297   41733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:12:39.882153   41733 api_server.go:51] waiting for apiserver process to appear ...
	I0629 12:12:39.882214   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:12:40.422672   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:12:40.921240   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:12:40.933995   41733 api_server.go:71] duration metric: took 1.05181252s to wait for apiserver process to appear ...
	I0629 12:12:40.934017   41733 api_server.go:87] waiting for apiserver healthz status ...
	I0629 12:12:40.934032   41733 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:62538/healthz ...
	I0629 12:12:40.935225   41733 api_server.go:256] stopped: https://127.0.0.1:62538/healthz: Get "https://127.0.0.1:62538/healthz": EOF
	I0629 12:12:41.435446   41733 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:62538/healthz ...
	I0629 12:12:44.555903   41733 api_server.go:266] https://127.0.0.1:62538/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0629 12:12:44.555920   41733 api_server.go:102] status: https://127.0.0.1:62538/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0629 12:12:44.935571   41733 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:62538/healthz ...
	I0629 12:12:44.940934   41733 api_server.go:266] https://127.0.0.1:62538/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 12:12:44.940951   41733 api_server.go:102] status: https://127.0.0.1:62538/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 12:12:45.437041   41733 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:62538/healthz ...
	I0629 12:12:45.444290   41733 api_server.go:266] https://127.0.0.1:62538/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 12:12:45.444302   41733 api_server.go:102] status: https://127.0.0.1:62538/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 12:12:45.935471   41733 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:62538/healthz ...
	I0629 12:12:45.942308   41733 api_server.go:266] https://127.0.0.1:62538/healthz returned 200:
	ok
	I0629 12:12:45.952038   41733 api_server.go:140] control plane version: v1.24.2
	I0629 12:12:45.952054   41733 api_server.go:130] duration metric: took 5.017880972s to wait for apiserver health ...
	I0629 12:12:45.952061   41733 cni.go:95] Creating CNI manager for ""
	I0629 12:12:45.952067   41733 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 12:12:45.952076   41733 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 12:12:45.960349   41733 system_pods.go:59] 9 kube-system pods found
	I0629 12:12:45.960372   41733 system_pods.go:61] "coredns-6d4b75cb6d-2gsk5" [c9d7132e-f877-48c6-9493-810c7fdcff0c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0629 12:12:45.960384   41733 system_pods.go:61] "coredns-6d4b75cb6d-9wn52" [6cf87e39-b15c-47f7-a015-ff68ce065e5f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0629 12:12:45.960388   41733 system_pods.go:61] "etcd-newest-cni-20220629121133-24356" [b398814e-e32a-4de4-88e5-978e1a2d51b7] Running
	I0629 12:12:45.960392   41733 system_pods.go:61] "kube-apiserver-newest-cni-20220629121133-24356" [31de6ac7-bbc5-4f4d-88df-09aea857ccb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0629 12:12:45.960398   41733 system_pods.go:61] "kube-controller-manager-newest-cni-20220629121133-24356" [b91952e0-8b84-4c7b-a40a-85bc6599941f] Running
	I0629 12:12:45.960403   41733 system_pods.go:61] "kube-proxy-tgvc5" [70f6241f-6d23-4a0d-9d6d-9a51140e9b8d] Running
	I0629 12:12:45.960407   41733 system_pods.go:61] "kube-scheduler-newest-cni-20220629121133-24356" [891e3e1d-be39-482c-872e-822aa00f8f5f] Running
	I0629 12:12:45.960414   41733 system_pods.go:61] "metrics-server-5c6f97fb75-44k7n" [df9e220a-c0e0-4006-860a-2d99b33b1144] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 12:12:45.960421   41733 system_pods.go:61] "storage-provisioner" [4b4463d8-1274-427c-b999-2b566e5081a8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0629 12:12:45.960425   41733 system_pods.go:74] duration metric: took 8.344088ms to wait for pod list to return data ...
	I0629 12:12:45.960431   41733 node_conditions.go:102] verifying NodePressure condition ...
	I0629 12:12:45.964468   41733 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0629 12:12:45.964487   41733 node_conditions.go:123] node cpu capacity is 6
	I0629 12:12:45.964496   41733 node_conditions.go:105] duration metric: took 4.060805ms to run NodePressure ...
	I0629 12:12:45.964507   41733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:12:46.316106   41733 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0629 12:12:46.325031   41733 ops.go:34] apiserver oom_adj: -16
	I0629 12:12:46.325046   41733 kubeadm.go:630] restartCluster took 11.184421012s
	I0629 12:12:46.325056   41733 kubeadm.go:397] StartCluster complete in 11.222120608s
	I0629 12:12:46.325077   41733 settings.go:142] acquiring lock: {Name:mk8cd784535a926dd1b6955ad1b3a357865d16d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 12:12:46.325161   41733 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 12:12:46.325817   41733 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk20ebad566718388182fa7c9da1cb4ef6bd9ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 12:12:46.329466   41733 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220629121133-24356" rescaled to 1
	I0629 12:12:46.329511   41733 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 12:12:46.329537   41733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0629 12:12:46.329546   41733 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0629 12:12:46.374352   41733 out.go:177] * Verifying Kubernetes components...
	I0629 12:12:46.329609   41733 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220629121133-24356"
	I0629 12:12:46.329610   41733 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220629121133-24356"
	I0629 12:12:46.329643   41733 addons.go:65] Setting dashboard=true in profile "newest-cni-20220629121133-24356"
	I0629 12:12:46.329674   41733 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220629121133-24356"
	I0629 12:12:46.329796   41733 config.go:178] Loaded profile config "newest-cni-20220629121133-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 12:12:46.395400   41733 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220629121133-24356"
	I0629 12:12:46.395401   41733 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220629121133-24356"
	I0629 12:12:46.395405   41733 addons.go:153] Setting addon dashboard=true in "newest-cni-20220629121133-24356"
	W0629 12:12:46.395449   41733 addons.go:162] addon dashboard should already be in state true
	W0629 12:12:46.395453   41733 addons.go:162] addon storage-provisioner should already be in state true
	I0629 12:12:46.395460   41733 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220629121133-24356"
	W0629 12:12:46.395493   41733 addons.go:162] addon metrics-server should already be in state true
	I0629 12:12:46.395511   41733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 12:12:46.395544   41733 host.go:66] Checking if "newest-cni-20220629121133-24356" exists ...
	I0629 12:12:46.395553   41733 host.go:66] Checking if "newest-cni-20220629121133-24356" exists ...
	I0629 12:12:46.395566   41733 host.go:66] Checking if "newest-cni-20220629121133-24356" exists ...
	I0629 12:12:46.395879   41733 cli_runner.go:164] Run: docker container inspect newest-cni-20220629121133-24356 --format={{.State.Status}}
	I0629 12:12:46.396688   41733 cli_runner.go:164] Run: docker container inspect newest-cni-20220629121133-24356 --format={{.State.Status}}
	I0629 12:12:46.396736   41733 cli_runner.go:164] Run: docker container inspect newest-cni-20220629121133-24356 --format={{.State.Status}}
	I0629 12:12:46.396797   41733 cli_runner.go:164] Run: docker container inspect newest-cni-20220629121133-24356 --format={{.State.Status}}
	I0629 12:12:46.449507   41733 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0629 12:12:46.449527   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:46.551690   41733 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 12:12:46.522779   41733 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220629121133-24356"
	I0629 12:12:46.589626   41733 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 12:12:46.626450   41733 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	W0629 12:12:46.626471   41733 addons.go:162] addon default-storageclass should already be in state true
	I0629 12:12:46.663330   41733 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0629 12:12:46.663339   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0629 12:12:46.663381   41733 host.go:66] Checking if "newest-cni-20220629121133-24356" exists ...
	I0629 12:12:46.700511   41733 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0629 12:12:46.700587   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:46.737559   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0629 12:12:46.775480   41733 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0629 12:12:46.737631   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:46.739078   41733 cli_runner.go:164] Run: docker container inspect newest-cni-20220629121133-24356 --format={{.State.Status}}
	I0629 12:12:46.791603   41733 api_server.go:51] waiting for apiserver process to appear ...
	I0629 12:12:46.796760   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0629 12:12:46.796774   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0629 12:12:46.796792   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:12:46.796852   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:46.816654   41733 api_server.go:71] duration metric: took 487.096402ms to wait for apiserver process to appear ...
	I0629 12:12:46.816710   41733 api_server.go:87] waiting for apiserver healthz status ...
	I0629 12:12:46.816733   41733 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:62538/healthz ...
	I0629 12:12:46.823384   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:46.826055   41733 api_server.go:266] https://127.0.0.1:62538/healthz returned 200:
	ok
	I0629 12:12:46.828540   41733 api_server.go:140] control plane version: v1.24.2
	I0629 12:12:46.828560   41733 api_server.go:130] duration metric: took 11.838984ms to wait for apiserver health ...
	I0629 12:12:46.828572   41733 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 12:12:46.836929   41733 system_pods.go:59] 9 kube-system pods found
	I0629 12:12:46.836954   41733 system_pods.go:61] "coredns-6d4b75cb6d-2gsk5" [c9d7132e-f877-48c6-9493-810c7fdcff0c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0629 12:12:46.836967   41733 system_pods.go:61] "coredns-6d4b75cb6d-9wn52" [6cf87e39-b15c-47f7-a015-ff68ce065e5f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0629 12:12:46.836979   41733 system_pods.go:61] "etcd-newest-cni-20220629121133-24356" [b398814e-e32a-4de4-88e5-978e1a2d51b7] Running
	I0629 12:12:46.836990   41733 system_pods.go:61] "kube-apiserver-newest-cni-20220629121133-24356" [31de6ac7-bbc5-4f4d-88df-09aea857ccb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0629 12:12:46.837006   41733 system_pods.go:61] "kube-controller-manager-newest-cni-20220629121133-24356" [b91952e0-8b84-4c7b-a40a-85bc6599941f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0629 12:12:46.837015   41733 system_pods.go:61] "kube-proxy-tgvc5" [70f6241f-6d23-4a0d-9d6d-9a51140e9b8d] Running
	I0629 12:12:46.837022   41733 system_pods.go:61] "kube-scheduler-newest-cni-20220629121133-24356" [891e3e1d-be39-482c-872e-822aa00f8f5f] Running
	I0629 12:12:46.837029   41733 system_pods.go:61] "metrics-server-5c6f97fb75-44k7n" [df9e220a-c0e0-4006-860a-2d99b33b1144] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 12:12:46.837036   41733 system_pods.go:61] "storage-provisioner" [4b4463d8-1274-427c-b999-2b566e5081a8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0629 12:12:46.837042   41733 system_pods.go:74] duration metric: took 8.464446ms to wait for pod list to return data ...
	I0629 12:12:46.837051   41733 default_sa.go:34] waiting for default service account to be created ...
	I0629 12:12:46.840230   41733 default_sa.go:45] found service account: "default"
	I0629 12:12:46.840247   41733 default_sa.go:55] duration metric: took 3.190141ms for default service account to be created ...
	I0629 12:12:46.840258   41733 kubeadm.go:572] duration metric: took 510.708763ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0629 12:12:46.840271   41733 node_conditions.go:102] verifying NodePressure condition ...
	I0629 12:12:46.844218   41733 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0629 12:12:46.844236   41733 node_conditions.go:123] node cpu capacity is 6
	I0629 12:12:46.844244   41733 node_conditions.go:105] duration metric: took 3.970296ms to run NodePressure ...
	I0629 12:12:46.844255   41733 start.go:213] waiting for startup goroutines ...
	I0629 12:12:46.873003   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:46.876793   41733 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0629 12:12:46.876815   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0629 12:12:46.876899   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:46.896227   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:46.940210   41733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 12:12:46.962962   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:46.973206   41733 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0629 12:12:46.973219   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0629 12:12:46.987916   41733 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0629 12:12:46.987927   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0629 12:12:47.020597   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0629 12:12:47.020612   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0629 12:12:47.021888   41733 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0629 12:12:47.021898   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0629 12:12:47.035048   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0629 12:12:47.035063   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0629 12:12:47.039533   41733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0629 12:12:47.052647   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0629 12:12:47.052659   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0629 12:12:47.116954   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0629 12:12:47.116967   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0629 12:12:47.126958   41733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0629 12:12:47.134818   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0629 12:12:47.134831   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0629 12:12:47.230278   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0629 12:12:47.230295   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0629 12:12:47.247331   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0629 12:12:47.247345   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0629 12:12:47.314734   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0629 12:12:47.314759   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0629 12:12:47.331421   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0629 12:12:47.331437   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0629 12:12:47.348713   41733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0629 12:12:48.031150   41733 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.090881748s)
	I0629 12:12:48.110600   41733 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.071010288s)
	I0629 12:12:48.110630   41733 addons.go:383] Verifying addon metrics-server=true in "newest-cni-20220629121133-24356"
	I0629 12:12:48.266026   41733 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0629 12:12:48.325441   41733 addons.go:414] enableAddons completed in 1.995803691s
	I0629 12:12:48.356437   41733 start.go:506] kubectl: 1.24.0, cluster: 1.24.2 (minor skew: 0)
	I0629 12:12:48.377748   41733 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220629121133-24356" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-29 18:53:02 UTC, end at Wed 2022-06-29 19:20:01 UTC. --
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 systemd[1]: Stopping Docker Application Container Engine...
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[131]: time="2022-06-29T18:53:05.216575736Z" level=info msg="Processing signal 'terminated'"
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[131]: time="2022-06-29T18:53:05.217825930Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[131]: time="2022-06-29T18:53:05.218386582Z" level=info msg="Daemon shutdown complete"
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 systemd[1]: docker.service: Succeeded.
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 systemd[1]: Stopped Docker Application Container Engine.
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 systemd[1]: Starting Docker Application Container Engine...
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.272004427Z" level=info msg="Starting up"
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.273752497Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.273789659Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.273812919Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.273823680Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.274963883Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.275024151Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.275067758Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.275110265Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.278499483Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.281321453Z" level=info msg="Loading containers: start."
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.354206270Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.383916961Z" level=info msg="Loading containers: done."
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.391706828Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.391760406Z" level=info msg="Daemon has completed initialization"
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 systemd[1]: Started Docker Application Container Engine.
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.417864571Z" level=info msg="API listen on [::]:2376"
	Jun 29 18:53:05 old-k8s-version-20220629114717-24356 dockerd[427]: time="2022-06-29T18:53:05.420446680Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-06-29T19:20:03Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  19:20:03 up  1:27,  0 users,  load average: 0.08, 0.32, 0.79
	Linux old-k8s-version-20220629114717-24356 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-29 18:53:02 UTC, end at Wed 2022-06-29 19:20:04 UTC. --
	Jun 29 19:20:02 old-k8s-version-20220629114717-24356 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 29 19:20:02 old-k8s-version-20220629114717-24356 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1670.
	Jun 29 19:20:02 old-k8s-version-20220629114717-24356 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 29 19:20:02 old-k8s-version-20220629114717-24356 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 29 19:20:02 old-k8s-version-20220629114717-24356 kubelet[34123]: I0629 19:20:02.781378   34123 server.go:410] Version: v1.16.0
	Jun 29 19:20:02 old-k8s-version-20220629114717-24356 kubelet[34123]: I0629 19:20:02.781746   34123 plugins.go:100] No cloud provider specified.
	Jun 29 19:20:02 old-k8s-version-20220629114717-24356 kubelet[34123]: I0629 19:20:02.781760   34123 server.go:773] Client rotation is on, will bootstrap in background
	Jun 29 19:20:02 old-k8s-version-20220629114717-24356 kubelet[34123]: I0629 19:20:02.783673   34123 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 29 19:20:02 old-k8s-version-20220629114717-24356 kubelet[34123]: W0629 19:20:02.784315   34123 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 29 19:20:02 old-k8s-version-20220629114717-24356 kubelet[34123]: W0629 19:20:02.784404   34123 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 29 19:20:02 old-k8s-version-20220629114717-24356 kubelet[34123]: F0629 19:20:02.784448   34123 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 29 19:20:02 old-k8s-version-20220629114717-24356 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 29 19:20:02 old-k8s-version-20220629114717-24356 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 29 19:20:03 old-k8s-version-20220629114717-24356 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1671.
	Jun 29 19:20:03 old-k8s-version-20220629114717-24356 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 29 19:20:03 old-k8s-version-20220629114717-24356 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 29 19:20:03 old-k8s-version-20220629114717-24356 kubelet[34135]: I0629 19:20:03.531575   34135 server.go:410] Version: v1.16.0
	Jun 29 19:20:03 old-k8s-version-20220629114717-24356 kubelet[34135]: I0629 19:20:03.531748   34135 plugins.go:100] No cloud provider specified.
	Jun 29 19:20:03 old-k8s-version-20220629114717-24356 kubelet[34135]: I0629 19:20:03.531758   34135 server.go:773] Client rotation is on, will bootstrap in background
	Jun 29 19:20:03 old-k8s-version-20220629114717-24356 kubelet[34135]: I0629 19:20:03.533337   34135 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 29 19:20:03 old-k8s-version-20220629114717-24356 kubelet[34135]: W0629 19:20:03.533983   34135 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jun 29 19:20:03 old-k8s-version-20220629114717-24356 kubelet[34135]: W0629 19:20:03.534047   34135 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jun 29 19:20:03 old-k8s-version-20220629114717-24356 kubelet[34135]: F0629 19:20:03.534095   34135 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jun 29 19:20:03 old-k8s-version-20220629114717-24356 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 29 19:20:03 old-k8s-version-20220629114717-24356 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0629 12:20:03.749395   42441 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356: exit status 2 (433.891085ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220629114717-24356" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.98s)

TestStartStop/group/newest-cni/serial/Pause (48.76s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-20220629121133-24356 --alsologtostderr -v=1

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220629121133-24356 -n newest-cni-20220629121133-24356

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220629121133-24356 -n newest-cni-20220629121133-24356: exit status 2 (16.144309246s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220629121133-24356 -n newest-cni-20220629121133-24356

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220629121133-24356 -n newest-cni-20220629121133-24356: exit status 2 (16.111018138s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-20220629121133-24356 --alsologtostderr -v=1

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220629121133-24356 -n newest-cni-20220629121133-24356
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220629121133-24356 -n newest-cni-20220629121133-24356
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220629121133-24356
helpers_test.go:235: (dbg) docker inspect newest-cni-20220629121133-24356:

-- stdout --
	[
	    {
	        "Id": "d71c7c76c5babd4cceaa3e5f8902c4110f65c51d34ad764fc486008152d70587",
	        "Created": "2022-06-29T19:11:40.324323632Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 315412,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T19:12:30.845963709Z",
	            "FinishedAt": "2022-06-29T19:12:28.866662058Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/d71c7c76c5babd4cceaa3e5f8902c4110f65c51d34ad764fc486008152d70587/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d71c7c76c5babd4cceaa3e5f8902c4110f65c51d34ad764fc486008152d70587/hostname",
	        "HostsPath": "/var/lib/docker/containers/d71c7c76c5babd4cceaa3e5f8902c4110f65c51d34ad764fc486008152d70587/hosts",
	        "LogPath": "/var/lib/docker/containers/d71c7c76c5babd4cceaa3e5f8902c4110f65c51d34ad764fc486008152d70587/d71c7c76c5babd4cceaa3e5f8902c4110f65c51d34ad764fc486008152d70587-json.log",
	        "Name": "/newest-cni-20220629121133-24356",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20220629121133-24356:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220629121133-24356",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0952f5cb56fcea7cca5d1c8b6783455954e0db8c0831bef54720f80dac3d67b4-init/diff:/var/lib/docker/overlay2/fffebe0fdfada5807aeb835ff23043496ab70477725ee4f168b630301ac03e45/diff:/var/lib/docker/overlay2/d4eb6d2f34aa8e5c143d900dccdec5da9e3d130567442e6745d4efac5202fe49/diff:/var/lib/docker/overlay2/eb35fadba12ed9c48500d69b77e98e7dd72e90d3de5197d58b370df5b5dca4c7/diff:/var/lib/docker/overlay2/7b63894f671ef1edaa7c3b80a2acbde52dcdb21970e320799b6884e79553ea3e/diff:/var/lib/docker/overlay2/3740b6bc6ff226137eb09a6350d4395dc04bd9012c6c66125dc2ea6b663082cd/diff:/var/lib/docker/overlay2/a2fda66ed4937725e85838baed61cac418abe2ba55b4e664bf944246efcdd371/diff:/var/lib/docker/overlay2/574408913c5c73ee699b85768bbb4c0ce70e697bf6eb623e32017c62e8413acd/diff:/var/lib/docker/overlay2/1cde03c3877bfb18ad0533f814863e3030abec268ff30faceab8815ea7e2daf2/diff:/var/lib/docker/overlay2/52bf889e64b2ea0160f303622d5febb9c52b864e5a6dc2bfa5db90933ccaaa29/diff:/var/lib/docker/overlay2/b131e6
ae4a7a7f5705d087e4001676276e4daa26d6acfc99799bb4992e322410/diff:/var/lib/docker/overlay2/3f5c774f6f46936a974bfc6530b012fda75a59b22450e3342486fe400ab4b531/diff:/var/lib/docker/overlay2/8462528084f0c44a79e421427e0e4bc9ddd7642428c47ff1899d41b265223245/diff:/var/lib/docker/overlay2/cb9765866d13ba37669ec242ea0a1af87c92c7291c716e52037a2ccadc64ac82/diff:/var/lib/docker/overlay2/f0d06e6fa53f3ca9622f1efcfac6fe3fd18d2e5b9e07be3d624b0b9987073e55/diff:/var/lib/docker/overlay2/4ebd12d8b25cff2d3d8a989c047b696088121f0964cc7f94c6d0178ef16e3e1f/diff:/var/lib/docker/overlay2/40e16f5720fd3a8c1c8792aea0ec143af819f19cad845dde40b57ed7e372ab73/diff:/var/lib/docker/overlay2/3ce5ee64ba683c997a13b7ffa65978b4c9652772729737facd794209d49251c3/diff:/var/lib/docker/overlay2/c55c549a78d490ea576942661ba65103ea2992693548217973bb8fa1a5948b74/diff:/var/lib/docker/overlay2/4651b16dbc2e22b8a43dc1154546514f2076168d12f9c108f85fe7c6e60325f0/diff:/var/lib/docker/overlay2/9576343ea03501b15b520a83ffdc675c6d9ecd501f6ffcf6564dd75aa4f2812a/diff:/var/lib/d
ocker/overlay2/635ba7d01f96fd1ec1acabf157f4e5c00cbf80adf65b7f8873e444745fef2c9b/diff:/var/lib/docker/overlay2/6bbe0ce6ca00a7eb5bd7c22def5fcab4ebecab4a0b4cbc5ed236429671a41b6c/diff:/var/lib/docker/overlay2/b335551ba0fcfd6bff6ef5627289041f3083dc338e67b4f4728d4937bb6fb33a/diff:/var/lib/docker/overlay2/58cd90f6ad9016f3c4befb63eac504c9d2f0fc66251c5c9e3348080785d3cec4/diff:/var/lib/docker/overlay2/b7d943a8463e032d405d531846436b89574f10efeea6e4f2df92e3bb0e169d8e/diff:/var/lib/docker/overlay2/e633899f71c18e322af1b75837392bc89fd4275534b5bc70037965b0b80a770d/diff:/var/lib/docker/overlay2/651aabda39b5851bd186e23bc84f1029d819ed8eb032b13ac12f50f3d1486bfb/diff:/var/lib/docker/overlay2/3b137e27694d242a419b3fd2f8605837edfe77dae9462c63c3d7b41538e82591/diff:/var/lib/docker/overlay2/e9d4369b871c47acb146b73f8cbe14b89b0f74027df9117a7dc73f5dee8fee1c/diff:/var/lib/docker/overlay2/9379269362a969b07cc7d7f9faff9fa3b745529df38758733014a5dbe2470775/diff:/var/lib/docker/overlay2/9231c154723fa536d9894f703ec0388448e8611d5a01d54bca3a5b0a0b1
7ffd2/diff:/var/lib/docker/overlay2/9610e37ded5c6da7bd2c8edc56c3ae864637bb354f8ea3d6d1ccee6bd5c2aa7f/diff:/var/lib/docker/overlay2/025ecca5e756b1b8177204df7b2f2567a76dda456b2f1a8e312efd63150a8943/diff:/var/lib/docker/overlay2/7e69089e438e096c36ea0a4a37280fd036841e3287e57635e3407eb58fc0b6da/diff:/var/lib/docker/overlay2/c6d9ef67ed33e64c8ac8c4cdc7c33eb68f5266987969676165cabc2cf2fd346b/diff:/var/lib/docker/overlay2/394627c68237f7993b91eb0c377001630bb2e709dd58f65d899d44a3586dae91/diff:/var/lib/docker/overlay2/0c0c3c94789fc85cd70d9ee2b56d67ce6471d4dced47f21f15152d4edb6bc3e5/diff:/var/lib/docker/overlay2/849809e48c9bcbfe092aa063fcd274f284eeacde89acbb602b439d4cf0aef9b6/diff:/var/lib/docker/overlay2/49c27f0a55f204b161aa2da33ba8004f46cb93bf673975ad1b6286ce659db632/diff:/var/lib/docker/overlay2/a712a8f5cdb2f3840c706296240407405826d2936df034393c1ddf3cf2480b5f/diff:/var/lib/docker/overlay2/47949bfd134ff7a50def5e9b3af3424faf216354d1f157552f3c63c67c2728ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0952f5cb56fcea7cca5d1c8b6783455954e0db8c0831bef54720f80dac3d67b4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0952f5cb56fcea7cca5d1c8b6783455954e0db8c0831bef54720f80dac3d67b4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0952f5cb56fcea7cca5d1c8b6783455954e0db8c0831bef54720f80dac3d67b4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220629121133-24356",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220629121133-24356/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220629121133-24356",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220629121133-24356",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220629121133-24356",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0e82b2ca4590db00240a40edf22b6ce7e49158be14e1ff968a3c5de67800ca63",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "62539"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "62540"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "62541"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "62542"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "62538"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0e82b2ca4590",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220629121133-24356": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d71c7c76c5ba",
	                        "newest-cni-20220629121133-24356"
	                    ],
	                    "NetworkID": "004d36dd9a4f8227511c4d2f49c2d5027c0b47da12140bcd2f2bd493925c6fb3",
	                    "EndpointID": "a0bdbbc6d1274c23b7c18e8ab64f564bcafe0dc5cf7fc6713884607ea8896c03",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220629121133-24356 -n newest-cni-20220629121133-24356
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-20220629121133-24356 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p newest-cni-20220629121133-24356 logs -n 25: (3.913857834s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT | 29 Jun 22 12:02 PDT |
	|         | embed-certs-20220629115611-24356                           |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |          |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |          |         |         |                     |                     |
	|         | --driver=docker                                            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| ssh     | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:02 PDT | 29 Jun 22 12:02 PDT |
	|         | embed-certs-20220629115611-24356                           |          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |         |         |                     |                     |
	| pause   | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:02 PDT | 29 Jun 22 12:02 PDT |
	|         | embed-certs-20220629115611-24356                           |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| unpause | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | embed-certs-20220629115611-24356                           |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | embed-certs-20220629115611-24356                           |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | embed-certs-20220629115611-24356                           |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | disable-driver-mounts-20220629120335-24356                 |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:04 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |          |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 29 Jun 22 12:05 PDT | 29 Jun 22 12:05 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:05 PDT | 29 Jun 22 12:05 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 29 Jun 22 12:05 PDT | 29 Jun 22 12:05 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:05 PDT | 29 Jun 22 12:10 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |          |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| ssh     | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:10 PDT | 29 Jun 22 12:10 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |         |         |                     |                     |
	| pause   | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:10 PDT | 29 Jun 22 12:10 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| unpause | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:11 PDT | 29 Jun 22 12:11 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:11 PDT | 29 Jun 22 12:11 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:11 PDT | 29 Jun 22 12:11 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	| start   | -p newest-cni-20220629121133-24356 --memory=2200           | minikube | jenkins | v1.26.0 | 29 Jun 22 12:11 PDT | 29 Jun 22 12:12 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.2              |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 29 Jun 22 12:12 PDT | 29 Jun 22 12:12 PDT |
	|         | newest-cni-20220629121133-24356                            |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:12 PDT | 29 Jun 22 12:12 PDT |
	|         | newest-cni-20220629121133-24356                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 29 Jun 22 12:12 PDT | 29 Jun 22 12:12 PDT |
	|         | newest-cni-20220629121133-24356                            |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p newest-cni-20220629121133-24356 --memory=2200           | minikube | jenkins | v1.26.0 | 29 Jun 22 12:12 PDT | 29 Jun 22 12:12 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.2              |          |         |         |                     |                     |
	| ssh     | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:12 PDT | 29 Jun 22 12:12 PDT |
	|         | newest-cni-20220629121133-24356                            |          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |         |         |                     |                     |
	| pause   | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:12 PDT | 29 Jun 22 12:12 PDT |
	|         | newest-cni-20220629121133-24356                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| unpause | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:13 PDT | 29 Jun 22 12:13 PDT |
	|         | newest-cni-20220629121133-24356                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	|---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/29 12:12:29
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 12:12:29.588569   41733 out.go:296] Setting OutFile to fd 1 ...
	I0629 12:12:29.588742   41733 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 12:12:29.588747   41733 out.go:309] Setting ErrFile to fd 2...
	I0629 12:12:29.588751   41733 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 12:12:29.589081   41733 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 12:12:29.589351   41733 out.go:303] Setting JSON to false
	I0629 12:12:29.604054   41733 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":11517,"bootTime":1656518432,"procs":373,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0629 12:12:29.604211   41733 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 12:12:29.626180   41733 out.go:177] * [newest-cni-20220629121133-24356] minikube v1.26.0 on Darwin 12.4
	I0629 12:12:29.668306   41733 notify.go:193] Checking for updates...
	I0629 12:12:29.689036   41733 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 12:12:29.731359   41733 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 12:12:29.752253   41733 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0629 12:12:29.773342   41733 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 12:12:29.794519   41733 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 12:12:29.817018   41733 config.go:178] Loaded profile config "newest-cni-20220629121133-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 12:12:29.817692   41733 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 12:12:29.888455   41733 docker.go:137] docker version: linux-20.10.16
	I0629 12:12:29.888591   41733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 12:12:30.011986   41733 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 19:12:29.950877572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 12:12:30.054549   41733 out.go:177] * Using the docker driver based on existing profile
	I0629 12:12:30.075406   41733 start.go:284] selected driver: docker
	I0629 12:12:30.075423   41733 start.go:808] validating driver "docker" against &{Name:newest-cni-20220629121133-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220629121133-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 12:12:30.075522   41733 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 12:12:30.078607   41733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 12:12:30.200514   41733 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 19:12:30.14084278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 12:12:30.200716   41733 start_flags.go:872] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0629 12:12:30.200733   41733 cni.go:95] Creating CNI manager for ""
	I0629 12:12:30.200742   41733 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 12:12:30.200751   41733 start_flags.go:310] config:
	{Name:newest-cni-20220629121133-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220629121133-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clu
ster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:
6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 12:12:30.222917   41733 out.go:177] * Starting control plane node newest-cni-20220629121133-24356 in cluster newest-cni-20220629121133-24356
	I0629 12:12:30.244330   41733 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 12:12:30.265398   41733 out.go:177] * Pulling base image ...
	I0629 12:12:30.308582   41733 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 12:12:30.308633   41733 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 12:12:30.308662   41733 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0629 12:12:30.308690   41733 cache.go:57] Caching tarball of preloaded images
	I0629 12:12:30.308864   41733 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 12:12:30.308882   41733 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0629 12:12:30.309747   41733 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/config.json ...
	I0629 12:12:30.374617   41733 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 12:12:30.374655   41733 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 12:12:30.374668   41733 cache.go:208] Successfully downloaded all kic artifacts
	I0629 12:12:30.374734   41733 start.go:352] acquiring machines lock for newest-cni-20220629121133-24356: {Name:mk042a3b5f3c7fb19f5a27cdd0c4d3bdf872dc19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 12:12:30.374833   41733 start.go:356] acquired machines lock for "newest-cni-20220629121133-24356" in 79.691µs
	I0629 12:12:30.374856   41733 start.go:94] Skipping create...Using existing machine configuration
	I0629 12:12:30.374862   41733 fix.go:55] fixHost starting: 
	I0629 12:12:30.375085   41733 cli_runner.go:164] Run: docker container inspect newest-cni-20220629121133-24356 --format={{.State.Status}}
	I0629 12:12:30.442031   41733 fix.go:103] recreateIfNeeded on newest-cni-20220629121133-24356: state=Stopped err=<nil>
	W0629 12:12:30.442065   41733 fix.go:129] unexpected machine state, will restart: <nil>
	I0629 12:12:30.464074   41733 out.go:177] * Restarting existing docker container for "newest-cni-20220629121133-24356" ...
	I0629 12:12:30.486024   41733 cli_runner.go:164] Run: docker start newest-cni-20220629121133-24356
	I0629 12:12:30.850374   41733 cli_runner.go:164] Run: docker container inspect newest-cni-20220629121133-24356 --format={{.State.Status}}
	I0629 12:12:30.924181   41733 kic.go:416] container "newest-cni-20220629121133-24356" state is running.
	I0629 12:12:30.925115   41733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220629121133-24356
	I0629 12:12:31.006727   41733 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/config.json ...
	I0629 12:12:31.007237   41733 machine.go:88] provisioning docker machine ...
	I0629 12:12:31.007269   41733 ubuntu.go:169] provisioning hostname "newest-cni-20220629121133-24356"
	I0629 12:12:31.007380   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:31.083305   41733 main.go:134] libmachine: Using SSH client type: native
	I0629 12:12:31.083491   41733 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 62539 <nil> <nil>}
	I0629 12:12:31.083504   41733 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220629121133-24356 && echo "newest-cni-20220629121133-24356" | sudo tee /etc/hostname
	I0629 12:12:31.211242   41733 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220629121133-24356
	
	I0629 12:12:31.211315   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:31.286171   41733 main.go:134] libmachine: Using SSH client type: native
	I0629 12:12:31.286391   41733 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 62539 <nil> <nil>}
	I0629 12:12:31.286414   41733 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220629121133-24356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220629121133-24356/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220629121133-24356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 12:12:31.404993   41733 main.go:134] libmachine: SSH cmd err, output: <nil>: 
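The hostname patch the provisioner ran above is idempotent: it only rewrites the `127.0.1.1` entry when the desired name is not already in `/etc/hosts`, and appends one if no `127.0.1.1` line exists. The same grep/sed pattern can be sketched against a scratch file (the hostname and file contents here are illustrative, not taken from the run; `sed -i` as written assumes GNU sed):

```shell
# Recreate minikube's idempotent hostname update against a scratch copy of /etc/hosts.
HOSTS=$(mktemp)
NAME="newest-cni-demo"   # illustrative hostname, not the real profile name
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

# Only touch the file if the desired name is missing entirely.
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
        # An existing 127.0.1.1 line: rewrite it in place.
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
    else
        # No 127.0.1.1 line yet: append one.
        echo "127.0.1.1 $NAME" >> "$HOSTS"
    fi
fi
grep '^127\.0\.1\.1' "$HOSTS"
```

Running the block a second time is a no-op, which is why the log reports an empty SSH output when the name already matches.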
	I0629 12:12:31.405015   41733 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube}
	I0629 12:12:31.405050   41733 ubuntu.go:177] setting up certificates
	I0629 12:12:31.405062   41733 provision.go:83] configureAuth start
	I0629 12:12:31.405134   41733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220629121133-24356
	I0629 12:12:31.479685   41733 provision.go:138] copyHostCerts
	I0629 12:12:31.479785   41733 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem, removing ...
	I0629 12:12:31.479795   41733 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem
	I0629 12:12:31.479881   41733 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem (1082 bytes)
	I0629 12:12:31.480083   41733 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem, removing ...
	I0629 12:12:31.480095   41733 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem
	I0629 12:12:31.480153   41733 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem (1123 bytes)
	I0629 12:12:31.480301   41733 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem, removing ...
	I0629 12:12:31.480307   41733 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem
	I0629 12:12:31.480382   41733 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem (1675 bytes)
	I0629 12:12:31.480500   41733 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220629121133-24356 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220629121133-24356]
	I0629 12:12:31.553993   41733 provision.go:172] copyRemoteCerts
	I0629 12:12:31.554070   41733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 12:12:31.554128   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:31.632422   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:31.719010   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0629 12:12:31.736812   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0629 12:12:31.754703   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0629 12:12:31.775146   41733 provision.go:86] duration metric: configureAuth took 370.060143ms
	I0629 12:12:31.775160   41733 ubuntu.go:193] setting minikube options for container-runtime
	I0629 12:12:31.775316   41733 config.go:178] Loaded profile config "newest-cni-20220629121133-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 12:12:31.775378   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:31.847694   41733 main.go:134] libmachine: Using SSH client type: native
	I0629 12:12:31.847864   41733 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 62539 <nil> <nil>}
	I0629 12:12:31.847875   41733 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0629 12:12:31.967172   41733 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0629 12:12:31.967183   41733 ubuntu.go:71] root file system type: overlay
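The root-filesystem probe above is a one-liner: `df` is asked for only the fstype column and the last row (the data row for `/`) is taken. A minimal sketch, assuming GNU coreutils `df` (the `--output` flag is not available on BSD/macOS `df`):

```shell
# Detect the root filesystem type the way the provisioner does:
# print only the fstype column for /, then keep the data row.
FSTYPE=$(df --output=fstype / | tail -n 1)
echo "root fstype: $FSTYPE"
```

Inside the kic container this reports `overlay`, which is what the `ubuntu.go:71` line records.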
	I0629 12:12:31.967317   41733 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0629 12:12:31.967387   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:32.037988   41733 main.go:134] libmachine: Using SSH client type: native
	I0629 12:12:32.038135   41733 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 62539 <nil> <nil>}
	I0629 12:12:32.038189   41733 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0629 12:12:32.167065   41733 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0629 12:12:32.167155   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:32.238743   41733 main.go:134] libmachine: Using SSH client type: native
	I0629 12:12:32.238893   41733 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 62539 <nil> <nil>}
	I0629 12:12:32.238905   41733 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0629 12:12:32.360199   41733 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 12:12:32.360216   41733 machine.go:91] provisioned docker machine in 1.352928421s
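The unit install above is change-detection driven: the freshly written `docker.service.new` only replaces the live unit (followed by `daemon-reload`, `enable`, and `restart`) when `diff` reports a difference, so an unchanged host skips a Docker restart entirely. The compare-then-swap shape can be sketched on scratch files (filenames and flags are illustrative):

```shell
# Compare-then-swap: only install the new unit (and signal a restart) if it differs.
OLD=$(mktemp); NEW=$(mktemp)
echo "ExecStart=/usr/bin/dockerd --old-flag" > "$OLD"
echo "ExecStart=/usr/bin/dockerd --new-flag" > "$NEW"

if diff -u "$OLD" "$NEW" > /dev/null; then
    RESTARTED=no        # files identical: leave the running service alone
else
    mv "$NEW" "$OLD"    # files differ: swap in the new unit...
    RESTARTED=yes       # ...and (in minikube) daemon-reload + restart docker
fi
echo "restarted=$RESTARTED"
```

The `|| { ...; }` form in the logged command is the same branch expressed as a short-circuit: `diff` succeeding (exit 0) skips the replacement group.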
	I0629 12:12:32.360226   41733 start.go:306] post-start starting for "newest-cni-20220629121133-24356" (driver="docker")
	I0629 12:12:32.360231   41733 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 12:12:32.360309   41733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 12:12:32.360361   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:32.431487   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:32.517761   41733 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 12:12:32.521520   41733 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 12:12:32.521537   41733 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 12:12:32.521543   41733 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 12:12:32.521548   41733 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 12:12:32.521559   41733 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/addons for local assets ...
	I0629 12:12:32.521666   41733 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files for local assets ...
	I0629 12:12:32.521801   41733 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem -> 243562.pem in /etc/ssl/certs
	I0629 12:12:32.521971   41733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 12:12:32.529745   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /etc/ssl/certs/243562.pem (1708 bytes)
	I0629 12:12:32.546093   41733 start.go:309] post-start completed in 185.852538ms
	I0629 12:12:32.546163   41733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 12:12:32.546210   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:32.617116   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:32.700718   41733 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0629 12:12:32.705139   41733 fix.go:57] fixHost completed within 2.33019891s
	I0629 12:12:32.705152   41733 start.go:81] releasing machines lock for "newest-cni-20220629121133-24356", held for 2.330240179s
	I0629 12:12:32.705224   41733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220629121133-24356
	I0629 12:12:32.776217   41733 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 12:12:32.776227   41733 ssh_runner.go:195] Run: systemctl --version
	I0629 12:12:32.776278   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:32.776310   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:32.852787   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:32.854483   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:33.421714   41733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0629 12:12:33.429145   41733 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I0629 12:12:33.441573   41733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 12:12:33.505252   41733 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0629 12:12:33.580689   41733 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0629 12:12:33.591697   41733 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0629 12:12:33.591757   41733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 12:12:33.601297   41733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 12:12:33.613993   41733 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0629 12:12:33.679329   41733 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0629 12:12:33.744434   41733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 12:12:33.812377   41733 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0629 12:12:34.075341   41733 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0629 12:12:34.147333   41733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 12:12:34.213850   41733 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0629 12:12:34.223490   41733 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0629 12:12:34.223554   41733 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0629 12:12:34.227483   41733 start.go:468] Will wait 60s for crictl version
	I0629 12:12:34.227524   41733 ssh_runner.go:195] Run: sudo crictl version
	I0629 12:12:34.255687   41733 start.go:477] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0629 12:12:34.255756   41733 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 12:12:34.290892   41733 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 12:12:34.367971   41733 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0629 12:12:34.368104   41733 cli_runner.go:164] Run: docker exec -t newest-cni-20220629121133-24356 dig +short host.docker.internal
	I0629 12:12:34.494783   41733 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0629 12:12:34.494880   41733 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0629 12:12:34.499195   41733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
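The `host.minikube.internal` refresh above uses a drop-then-append pattern: `grep -v` strips any stale line for the name, the `echo` appends the current mapping, and the result is copied back over `/etc/hosts` (via `sudo cp`, since a shell redirect would not run under sudo). A sketch of the same pattern against a scratch file, with an illustrative stale IP:

```shell
# Refresh a single hosts entry: drop any stale line for the name, append the new one.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n192.168.65.9\thost.minikube.internal\n' > "$HOSTS"

IP="192.168.65.2"
TMP=$(mktemp)
# grep -v removes the old entry (if any); printf appends the current mapping.
{ grep -v 'host.minikube.internal$' "$HOSTS"; printf '%s\thost.minikube.internal\n' "$IP"; } > "$TMP"
cp "$TMP" "$HOSTS"
grep 'host.minikube.internal' "$HOSTS"
```

This is also why the preceding `grep 192.168.65.2 ... /etc/hosts` run exists: when the exact entry is already present the rewrite is skipped.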
	I0629 12:12:34.508835   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:34.602759   41733 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0629 12:12:34.623818   41733 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 12:12:34.623948   41733 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 12:12:34.654471   41733 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0629 12:12:34.654492   41733 docker.go:533] Images already preloaded, skipping extraction
	I0629 12:12:34.654556   41733 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 12:12:34.685516   41733 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0629 12:12:34.685540   41733 cache_images.go:84] Images are preloaded, skipping loading
	I0629 12:12:34.685619   41733 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0629 12:12:34.759279   41733 cni.go:95] Creating CNI manager for ""
	I0629 12:12:34.759290   41733 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 12:12:34.759307   41733 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0629 12:12:34.759324   41733 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220629121133-24356 NodeName:newest-cni-20220629121133-24356 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:fal
se] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 12:12:34.759449   41733 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-20220629121133-24356"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0629 12:12:34.759532   41733 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220629121133-24356 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220629121133-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0629 12:12:34.759600   41733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0629 12:12:34.767268   41733 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 12:12:34.767320   41733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0629 12:12:34.774536   41733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (530 bytes)
	I0629 12:12:34.787443   41733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 12:12:34.799855   41733 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2189 bytes)
	I0629 12:12:34.812169   41733 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0629 12:12:34.815908   41733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 12:12:34.825528   41733 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356 for IP: 192.168.67.2
	I0629 12:12:34.825648   41733 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key
	I0629 12:12:34.825704   41733 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key
	I0629 12:12:34.825782   41733 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/client.key
	I0629 12:12:34.825849   41733 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/apiserver.key.c7fa3a9e
	I0629 12:12:34.825919   41733 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/proxy-client.key
	I0629 12:12:34.826130   41733 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem (1338 bytes)
	W0629 12:12:34.826169   41733 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356_empty.pem, impossibly tiny 0 bytes
	I0629 12:12:34.826180   41733 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem (1679 bytes)
	I0629 12:12:34.826212   41733 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem (1082 bytes)
	I0629 12:12:34.826244   41733 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem (1123 bytes)
	I0629 12:12:34.826274   41733 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem (1675 bytes)
	I0629 12:12:34.826337   41733 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem (1708 bytes)
	I0629 12:12:34.826873   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0629 12:12:34.843557   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0629 12:12:34.860588   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0629 12:12:34.877409   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0629 12:12:34.893984   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 12:12:34.910737   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0629 12:12:34.927624   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 12:12:34.944443   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0629 12:12:34.961512   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem --> /usr/share/ca-certificates/24356.pem (1338 bytes)
	I0629 12:12:34.978266   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /usr/share/ca-certificates/243562.pem (1708 bytes)
	I0629 12:12:34.995472   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 12:12:35.012505   41733 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0629 12:12:35.024964   41733 ssh_runner.go:195] Run: openssl version
	I0629 12:12:35.030215   41733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/243562.pem && ln -fs /usr/share/ca-certificates/243562.pem /etc/ssl/certs/243562.pem"
	I0629 12:12:35.038129   41733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/243562.pem
	I0629 12:12:35.042019   41733 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 17:58 /usr/share/ca-certificates/243562.pem
	I0629 12:12:35.042061   41733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/243562.pem
	I0629 12:12:35.047267   41733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/243562.pem /etc/ssl/certs/3ec20f2e.0"
	I0629 12:12:35.054538   41733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 12:12:35.062220   41733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 12:12:35.066203   41733 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 17:54 /usr/share/ca-certificates/minikubeCA.pem
	I0629 12:12:35.066240   41733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 12:12:35.071307   41733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 12:12:35.078467   41733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24356.pem && ln -fs /usr/share/ca-certificates/24356.pem /etc/ssl/certs/24356.pem"
	I0629 12:12:35.086274   41733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24356.pem
	I0629 12:12:35.090276   41733 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 17:58 /usr/share/ca-certificates/24356.pem
	I0629 12:12:35.090313   41733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24356.pem
	I0629 12:12:35.095533   41733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24356.pem /etc/ssl/certs/51391683.0"
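The three `openssl x509 -hash` / `ln -fs` sequences above install CA certificates in c_rehash style: each cert gets a `<subject-hash>.0` symlink so OpenSSL can locate it by directory lookup (`-CApath`). A self-contained sketch of that step (assumption: a throwaway self-signed cert in /tmp rather than the minikube CA files):

```shell
# Throwaway trust-store directory mirroring /etc/ssl/certs
dir=/tmp/certs-demo
mkdir -p "$dir"

# Generate a self-signed demo certificate (hypothetical CN, not minikube's)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj /CN=certs-demo \
  -keyout "$dir/demo.key" -out "$dir/demo.pem" 2>/dev/null

# c_rehash-style link: <subject-hash>.0 -> cert, same as the log's ln -fs step
hash=$(openssl x509 -hash -noout -in "$dir/demo.pem")
ln -fs "$dir/demo.pem" "$dir/$hash.0"

# OpenSSL now finds the cert via directory lookup
openssl verify -CApath "$dir" "$dir/demo.pem"   # prints: /tmp/certs-demo/demo.pem: OK
```

The `.0` suffix disambiguates distinct certificates whose subjects hash to the same value; a collision would get `.1`, `.2`, and so on.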
	I0629 12:12:35.102606   41733 kubeadm.go:395] StartCluster: {Name:newest-cni-20220629121133-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220629121133-24356 Namespace:default APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubele
t:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 12:12:35.102713   41733 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 12:12:35.132448   41733 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0629 12:12:35.140266   41733 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0629 12:12:35.140281   41733 kubeadm.go:626] restartCluster start
	I0629 12:12:35.140327   41733 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0629 12:12:35.146994   41733 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:35.147056   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:35.219650   41733 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220629121133-24356" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 12:12:35.219829   41733 kubeconfig.go:127] "newest-cni-20220629121133-24356" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig - will repair!
	I0629 12:12:35.220162   41733 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk20ebad566718388182fa7c9da1cb4ef6bd9ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 12:12:35.221494   41733 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0629 12:12:35.229173   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:35.229229   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:35.237398   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:35.438653   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:35.438806   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:35.449653   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:35.638446   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:35.638653   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:35.649603   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:35.839288   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:35.839468   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:35.850410   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:36.038612   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:36.038695   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:36.048631   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:36.238683   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:36.238815   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:36.249963   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:36.438649   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:36.438836   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:36.450067   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:36.638692   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:36.638870   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:36.649564   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:36.838638   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:36.838714   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:36.847331   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:37.038701   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:37.038777   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:37.049187   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:37.238747   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:37.238937   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:37.249608   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:37.438729   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:37.438903   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:37.449567   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:37.639628   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:37.639781   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:37.650435   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:37.838708   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:37.838812   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:37.849567   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:38.038733   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:38.038840   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:38.049254   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:38.239139   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:38.239235   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:38.250125   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:38.250135   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:38.250179   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:38.258469   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:38.258482   41733 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0629 12:12:38.258492   41733 kubeadm.go:1092] stopping kube-system containers ...
	I0629 12:12:38.258551   41733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 12:12:38.289744   41733 docker.go:434] Stopping containers: [b9102467e462 b7ac5a073ab7 1aaad07a6a07 137a44de5e43 995d90c1cfbe 2da50998e266 2c49cd15cdb0 bd178c2d55c0 67eaf5abb356 c6cdb8f06829 c6b7f1c8b2e0 154ec38f5f06 24248b5ec744 3ee0db0d474b 5270423c28e0 fcf2cbbeac73]
	I0629 12:12:38.289817   41733 ssh_runner.go:195] Run: docker stop b9102467e462 b7ac5a073ab7 1aaad07a6a07 137a44de5e43 995d90c1cfbe 2da50998e266 2c49cd15cdb0 bd178c2d55c0 67eaf5abb356 c6cdb8f06829 c6b7f1c8b2e0 154ec38f5f06 24248b5ec744 3ee0db0d474b 5270423c28e0 fcf2cbbeac73
	I0629 12:12:38.320147   41733 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0629 12:12:38.330428   41733 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 12:12:38.340507   41733 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jun 29 19:11 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun 29 19:11 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jun 29 19:12 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun 29 19:11 /etc/kubernetes/scheduler.conf
	
	I0629 12:12:38.340589   41733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0629 12:12:38.350519   41733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0629 12:12:38.357684   41733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0629 12:12:38.364728   41733 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:38.364780   41733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0629 12:12:38.371710   41733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0629 12:12:38.379123   41733 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:38.379175   41733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0629 12:12:38.385993   41733 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 12:12:38.393168   41733 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0629 12:12:38.393180   41733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:12:38.436431   41733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:12:39.589898   41733 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.153415676s)
	I0629 12:12:39.610848   41733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:12:39.777939   41733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:12:39.828297   41733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:12:39.882153   41733 api_server.go:51] waiting for apiserver process to appear ...
	I0629 12:12:39.882214   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:12:40.422672   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:12:40.921240   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:12:40.933995   41733 api_server.go:71] duration metric: took 1.05181252s to wait for apiserver process to appear ...
	I0629 12:12:40.934017   41733 api_server.go:87] waiting for apiserver healthz status ...
	I0629 12:12:40.934032   41733 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:62538/healthz ...
	I0629 12:12:40.935225   41733 api_server.go:256] stopped: https://127.0.0.1:62538/healthz: Get "https://127.0.0.1:62538/healthz": EOF
	I0629 12:12:41.435446   41733 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:62538/healthz ...
	I0629 12:12:44.555903   41733 api_server.go:266] https://127.0.0.1:62538/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0629 12:12:44.555920   41733 api_server.go:102] status: https://127.0.0.1:62538/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0629 12:12:44.935571   41733 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:62538/healthz ...
	I0629 12:12:44.940934   41733 api_server.go:266] https://127.0.0.1:62538/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 12:12:44.940951   41733 api_server.go:102] status: https://127.0.0.1:62538/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 12:12:45.437041   41733 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:62538/healthz ...
	I0629 12:12:45.444290   41733 api_server.go:266] https://127.0.0.1:62538/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 12:12:45.444302   41733 api_server.go:102] status: https://127.0.0.1:62538/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 12:12:45.935471   41733 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:62538/healthz ...
	I0629 12:12:45.942308   41733 api_server.go:266] https://127.0.0.1:62538/healthz returned 200:
	ok
	I0629 12:12:45.952038   41733 api_server.go:140] control plane version: v1.24.2
	I0629 12:12:45.952054   41733 api_server.go:130] duration metric: took 5.017880972s to wait for apiserver health ...
	I0629 12:12:45.952061   41733 cni.go:95] Creating CNI manager for ""
	I0629 12:12:45.952067   41733 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 12:12:45.952076   41733 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 12:12:45.960349   41733 system_pods.go:59] 9 kube-system pods found
	I0629 12:12:45.960372   41733 system_pods.go:61] "coredns-6d4b75cb6d-2gsk5" [c9d7132e-f877-48c6-9493-810c7fdcff0c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0629 12:12:45.960384   41733 system_pods.go:61] "coredns-6d4b75cb6d-9wn52" [6cf87e39-b15c-47f7-a015-ff68ce065e5f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0629 12:12:45.960388   41733 system_pods.go:61] "etcd-newest-cni-20220629121133-24356" [b398814e-e32a-4de4-88e5-978e1a2d51b7] Running
	I0629 12:12:45.960392   41733 system_pods.go:61] "kube-apiserver-newest-cni-20220629121133-24356" [31de6ac7-bbc5-4f4d-88df-09aea857ccb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0629 12:12:45.960398   41733 system_pods.go:61] "kube-controller-manager-newest-cni-20220629121133-24356" [b91952e0-8b84-4c7b-a40a-85bc6599941f] Running
	I0629 12:12:45.960403   41733 system_pods.go:61] "kube-proxy-tgvc5" [70f6241f-6d23-4a0d-9d6d-9a51140e9b8d] Running
	I0629 12:12:45.960407   41733 system_pods.go:61] "kube-scheduler-newest-cni-20220629121133-24356" [891e3e1d-be39-482c-872e-822aa00f8f5f] Running
	I0629 12:12:45.960414   41733 system_pods.go:61] "metrics-server-5c6f97fb75-44k7n" [df9e220a-c0e0-4006-860a-2d99b33b1144] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 12:12:45.960421   41733 system_pods.go:61] "storage-provisioner" [4b4463d8-1274-427c-b999-2b566e5081a8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0629 12:12:45.960425   41733 system_pods.go:74] duration metric: took 8.344088ms to wait for pod list to return data ...
	I0629 12:12:45.960431   41733 node_conditions.go:102] verifying NodePressure condition ...
	I0629 12:12:45.964468   41733 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0629 12:12:45.964487   41733 node_conditions.go:123] node cpu capacity is 6
	I0629 12:12:45.964496   41733 node_conditions.go:105] duration metric: took 4.060805ms to run NodePressure ...
	I0629 12:12:45.964507   41733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:12:46.316106   41733 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0629 12:12:46.325031   41733 ops.go:34] apiserver oom_adj: -16
	I0629 12:12:46.325046   41733 kubeadm.go:630] restartCluster took 11.184421012s
	I0629 12:12:46.325056   41733 kubeadm.go:397] StartCluster complete in 11.222120608s
	I0629 12:12:46.325077   41733 settings.go:142] acquiring lock: {Name:mk8cd784535a926dd1b6955ad1b3a357865d16d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 12:12:46.325161   41733 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 12:12:46.325817   41733 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk20ebad566718388182fa7c9da1cb4ef6bd9ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 12:12:46.329466   41733 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220629121133-24356" rescaled to 1
	I0629 12:12:46.329511   41733 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 12:12:46.329537   41733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0629 12:12:46.329546   41733 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0629 12:12:46.374352   41733 out.go:177] * Verifying Kubernetes components...
	I0629 12:12:46.329609   41733 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220629121133-24356"
	I0629 12:12:46.329610   41733 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220629121133-24356"
	I0629 12:12:46.329643   41733 addons.go:65] Setting dashboard=true in profile "newest-cni-20220629121133-24356"
	I0629 12:12:46.329674   41733 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220629121133-24356"
	I0629 12:12:46.329796   41733 config.go:178] Loaded profile config "newest-cni-20220629121133-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 12:12:46.395400   41733 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220629121133-24356"
	I0629 12:12:46.395401   41733 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220629121133-24356"
	I0629 12:12:46.395405   41733 addons.go:153] Setting addon dashboard=true in "newest-cni-20220629121133-24356"
	W0629 12:12:46.395449   41733 addons.go:162] addon dashboard should already be in state true
	W0629 12:12:46.395453   41733 addons.go:162] addon storage-provisioner should already be in state true
	I0629 12:12:46.395460   41733 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220629121133-24356"
	W0629 12:12:46.395493   41733 addons.go:162] addon metrics-server should already be in state true
	I0629 12:12:46.395511   41733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 12:12:46.395544   41733 host.go:66] Checking if "newest-cni-20220629121133-24356" exists ...
	I0629 12:12:46.395553   41733 host.go:66] Checking if "newest-cni-20220629121133-24356" exists ...
	I0629 12:12:46.395566   41733 host.go:66] Checking if "newest-cni-20220629121133-24356" exists ...
	I0629 12:12:46.395879   41733 cli_runner.go:164] Run: docker container inspect newest-cni-20220629121133-24356 --format={{.State.Status}}
	I0629 12:12:46.396688   41733 cli_runner.go:164] Run: docker container inspect newest-cni-20220629121133-24356 --format={{.State.Status}}
	I0629 12:12:46.396736   41733 cli_runner.go:164] Run: docker container inspect newest-cni-20220629121133-24356 --format={{.State.Status}}
	I0629 12:12:46.396797   41733 cli_runner.go:164] Run: docker container inspect newest-cni-20220629121133-24356 --format={{.State.Status}}
	I0629 12:12:46.449507   41733 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0629 12:12:46.449527   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:46.551690   41733 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 12:12:46.522779   41733 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220629121133-24356"
	I0629 12:12:46.589626   41733 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 12:12:46.626450   41733 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	W0629 12:12:46.626471   41733 addons.go:162] addon default-storageclass should already be in state true
	I0629 12:12:46.663330   41733 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0629 12:12:46.663339   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0629 12:12:46.663381   41733 host.go:66] Checking if "newest-cni-20220629121133-24356" exists ...
	I0629 12:12:46.700511   41733 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0629 12:12:46.700587   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:46.737559   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0629 12:12:46.775480   41733 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0629 12:12:46.737631   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:46.739078   41733 cli_runner.go:164] Run: docker container inspect newest-cni-20220629121133-24356 --format={{.State.Status}}
	I0629 12:12:46.791603   41733 api_server.go:51] waiting for apiserver process to appear ...
	I0629 12:12:46.796760   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0629 12:12:46.796774   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0629 12:12:46.796792   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:12:46.796852   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:46.816654   41733 api_server.go:71] duration metric: took 487.096402ms to wait for apiserver process to appear ...
	I0629 12:12:46.816710   41733 api_server.go:87] waiting for apiserver healthz status ...
	I0629 12:12:46.816733   41733 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:62538/healthz ...
	I0629 12:12:46.823384   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:46.826055   41733 api_server.go:266] https://127.0.0.1:62538/healthz returned 200:
	ok
	I0629 12:12:46.828540   41733 api_server.go:140] control plane version: v1.24.2
	I0629 12:12:46.828560   41733 api_server.go:130] duration metric: took 11.838984ms to wait for apiserver health ...
	I0629 12:12:46.828572   41733 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 12:12:46.836929   41733 system_pods.go:59] 9 kube-system pods found
	I0629 12:12:46.836954   41733 system_pods.go:61] "coredns-6d4b75cb6d-2gsk5" [c9d7132e-f877-48c6-9493-810c7fdcff0c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0629 12:12:46.836967   41733 system_pods.go:61] "coredns-6d4b75cb6d-9wn52" [6cf87e39-b15c-47f7-a015-ff68ce065e5f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0629 12:12:46.836979   41733 system_pods.go:61] "etcd-newest-cni-20220629121133-24356" [b398814e-e32a-4de4-88e5-978e1a2d51b7] Running
	I0629 12:12:46.836990   41733 system_pods.go:61] "kube-apiserver-newest-cni-20220629121133-24356" [31de6ac7-bbc5-4f4d-88df-09aea857ccb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0629 12:12:46.837006   41733 system_pods.go:61] "kube-controller-manager-newest-cni-20220629121133-24356" [b91952e0-8b84-4c7b-a40a-85bc6599941f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0629 12:12:46.837015   41733 system_pods.go:61] "kube-proxy-tgvc5" [70f6241f-6d23-4a0d-9d6d-9a51140e9b8d] Running
	I0629 12:12:46.837022   41733 system_pods.go:61] "kube-scheduler-newest-cni-20220629121133-24356" [891e3e1d-be39-482c-872e-822aa00f8f5f] Running
	I0629 12:12:46.837029   41733 system_pods.go:61] "metrics-server-5c6f97fb75-44k7n" [df9e220a-c0e0-4006-860a-2d99b33b1144] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 12:12:46.837036   41733 system_pods.go:61] "storage-provisioner" [4b4463d8-1274-427c-b999-2b566e5081a8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0629 12:12:46.837042   41733 system_pods.go:74] duration metric: took 8.464446ms to wait for pod list to return data ...
	I0629 12:12:46.837051   41733 default_sa.go:34] waiting for default service account to be created ...
	I0629 12:12:46.840230   41733 default_sa.go:45] found service account: "default"
	I0629 12:12:46.840247   41733 default_sa.go:55] duration metric: took 3.190141ms for default service account to be created ...
	I0629 12:12:46.840258   41733 kubeadm.go:572] duration metric: took 510.708763ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0629 12:12:46.840271   41733 node_conditions.go:102] verifying NodePressure condition ...
	I0629 12:12:46.844218   41733 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0629 12:12:46.844236   41733 node_conditions.go:123] node cpu capacity is 6
	I0629 12:12:46.844244   41733 node_conditions.go:105] duration metric: took 3.970296ms to run NodePressure ...
	I0629 12:12:46.844255   41733 start.go:213] waiting for startup goroutines ...
	I0629 12:12:46.873003   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:46.876793   41733 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0629 12:12:46.876815   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0629 12:12:46.876899   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:46.896227   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:46.940210   41733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 12:12:46.962962   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:46.973206   41733 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0629 12:12:46.973219   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0629 12:12:46.987916   41733 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0629 12:12:46.987927   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0629 12:12:47.020597   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0629 12:12:47.020612   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0629 12:12:47.021888   41733 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0629 12:12:47.021898   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0629 12:12:47.035048   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0629 12:12:47.035063   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0629 12:12:47.039533   41733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0629 12:12:47.052647   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0629 12:12:47.052659   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0629 12:12:47.116954   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0629 12:12:47.116967   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0629 12:12:47.126958   41733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0629 12:12:47.134818   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0629 12:12:47.134831   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0629 12:12:47.230278   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0629 12:12:47.230295   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0629 12:12:47.247331   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0629 12:12:47.247345   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0629 12:12:47.314734   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0629 12:12:47.314759   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0629 12:12:47.331421   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0629 12:12:47.331437   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0629 12:12:47.348713   41733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0629 12:12:48.031150   41733 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.090881748s)
	I0629 12:12:48.110600   41733 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.071010288s)
	I0629 12:12:48.110630   41733 addons.go:383] Verifying addon metrics-server=true in "newest-cni-20220629121133-24356"
	I0629 12:12:48.266026   41733 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0629 12:12:48.325441   41733 addons.go:414] enableAddons completed in 1.995803691s
	I0629 12:12:48.356437   41733 start.go:506] kubectl: 1.24.0, cluster: 1.24.2 (minor skew: 0)
	I0629 12:12:48.377748   41733 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220629121133-24356" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-29 19:12:30 UTC, end at Wed 2022-06-29 19:13:26 UTC. --
	Jun 29 19:12:33 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:33.883249123Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 29 19:12:33 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:33.883282800Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 29 19:12:33 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:33.883325865Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 29 19:12:33 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:33.883337011Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 29 19:12:33 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:33.884306859Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 29 19:12:33 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:33.884365454Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 29 19:12:33 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:33.884439594Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 29 19:12:33 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:33.884484606Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 29 19:12:33 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:33.887445913Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 29 19:12:33 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:33.891664648Z" level=info msg="Loading containers: start."
	Jun 29 19:12:34 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:34.021476794Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 29 19:12:34 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:34.054020458Z" level=info msg="Loading containers: done."
	Jun 29 19:12:34 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:34.062415310Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jun 29 19:12:34 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:34.062479308Z" level=info msg="Daemon has completed initialization"
	Jun 29 19:12:34 newest-cni-20220629121133-24356 systemd[1]: Started Docker Application Container Engine.
	Jun 29 19:12:34 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:34.083457179Z" level=info msg="API listen on [::]:2376"
	Jun 29 19:12:34 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:34.088815692Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 29 19:12:46 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:46.474630239Z" level=info msg="ignoring event" container=c6b19ee41ee86496990821bd74a72b1f2eee626fc5d374de8ddcbacec95d8d4f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:12:46 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:46.919020207Z" level=info msg="ignoring event" container=e27793db30c44a3a50a98b2792ae37f2b128af7c03958138e33c97bcea35830b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:12:48 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:48.155290383Z" level=info msg="ignoring event" container=6df3221e3639de879a8686b8664bf7c5151ba1754f2d1f300e90e09af4b7e69c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:12:48 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:48.229656147Z" level=info msg="ignoring event" container=b810ddcb5a2c08a114340919625275f43b1bdb996b2dbfbcf11a7ca744fe232d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:12:48 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:48.876144566Z" level=info msg="ignoring event" container=36f3e6dc6be6575825dea9339a85533cf95ab582f3baa4e662404b8a4e10ec2f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:12:48 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:48.884501978Z" level=info msg="ignoring event" container=ae5600c0854f5f2576b66cfb6cfe69688448c762a4997d8cd40fcdd515018ca6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:12:49 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:49.927468796Z" level=info msg="ignoring event" container=c0c456ff668cd9b48cec7b0a5990c8cabe875983db4ae12d648d896f95e34114 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:12:49 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:49.927680592Z" level=info msg="ignoring event" container=4a708cf64ff1701f2da2e8aa1540beec46b1eb0d3b15a0495bfe804b38b79ca1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	832875ac54550       6e38f40d628db       40 seconds ago       Running             storage-provisioner       1                   dfd4bdbabf56a
	5ceda341afbb9       a634548d10b03       41 seconds ago       Running             kube-proxy                1                   49c54f147a8f0
	cae530751925f       aebe758cef4cd       46 seconds ago       Running             etcd                      1                   c9f793b29f3c4
	cf60daa2910e8       34cdf99b1bb3b       46 seconds ago       Running             kube-controller-manager   1                   83b10ece13d15
	e6b78ff80d34b       d3377ffb7177c       46 seconds ago       Running             kube-apiserver            1                   6f6d81d2f7f11
	4d7ec14e3d562       5d725196c1f47       46 seconds ago       Running             kube-scheduler            1                   2323fcceb950d
	b9102467e4628       6e38f40d628db       About a minute ago   Exited              storage-provisioner       0                   1aaad07a6a073
	995d90c1cfbed       a634548d10b03       About a minute ago   Exited              kube-proxy                0                   bd178c2d55c0a
	67eaf5abb3561       aebe758cef4cd       About a minute ago   Exited              etcd                      0                   c6cdb8f068299
	c6b7f1c8b2e0f       34cdf99b1bb3b       About a minute ago   Exited              kube-controller-manager   0                   154ec38f5f06b
	24248b5ec7441       d3377ffb7177c       About a minute ago   Exited              kube-apiserver            0                   fcf2cbbeac73f
	3ee0db0d474bc       5d725196c1f47       About a minute ago   Exited              kube-scheduler            0                   5270423c28e0c
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20220629121133-24356
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20220629121133-24356
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=80ef72c6e06144133907f90b1b2924df52b551ed
	                    minikube.k8s.io/name=newest-cni-20220629121133-24356
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_29T12_12_01_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Jun 2022 19:11:58 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20220629121133-24356
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Jun 2022 19:13:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Jun 2022 19:13:23 +0000   Wed, 29 Jun 2022 19:11:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Jun 2022 19:13:23 +0000   Wed, 29 Jun 2022 19:11:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Jun 2022 19:13:23 +0000   Wed, 29 Jun 2022 19:11:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 29 Jun 2022 19:13:23 +0000   Wed, 29 Jun 2022 19:13:23 +0000   KubeletNotReady              PLEG is not healthy: pleg has yet to be successful
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    newest-cni-20220629121133-24356
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbe1e1cef6e940328962dca52b3c5731
	  System UUID:                46aaca5c-da45-4fce-b49b-973f0583fbb1
	  Boot ID:                    fadc233d-8cf8-4f28-b4a1-fb218440cdcd
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-2gsk5                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     72s
	  kube-system                 etcd-newest-cni-20220629121133-24356                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         85s
	  kube-system                 kube-apiserver-newest-cni-20220629121133-24356             250m (4%)     0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-newest-cni-20220629121133-24356    200m (3%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-tgvc5                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-scheduler-newest-cni-20220629121133-24356             100m (1%)     0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 metrics-server-5c6f97fb75-44k7n                            100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         70s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 40s                kube-proxy       
	  Normal  Starting                 71s                kube-proxy       
	  Normal  NodeHasSufficientPID     85s                kubelet          Node newest-cni-20220629121133-24356 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  85s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  85s                kubelet          Node newest-cni-20220629121133-24356 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    85s                kubelet          Node newest-cni-20220629121133-24356 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                85s                kubelet          Node newest-cni-20220629121133-24356 status is now: NodeReady
	  Normal  Starting                 85s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           73s                node-controller  Node newest-cni-20220629121133-24356 event: Registered Node newest-cni-20220629121133-24356 in Controller
	  Normal  NodeAllocatableEnforced  47s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 47s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    46s (x4 over 47s)  kubelet          Node newest-cni-20220629121133-24356 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     46s (x3 over 47s)  kubelet          Node newest-cni-20220629121133-24356 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  46s (x4 over 47s)  kubelet          Node newest-cni-20220629121133-24356 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-20220629121133-24356 event: Registered Node newest-cni-20220629121133-24356 in Controller
	  Normal  Starting                 3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3s                 kubelet          Node newest-cni-20220629121133-24356 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s                 kubelet          Node newest-cni-20220629121133-24356 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s                 kubelet          Node newest-cni-20220629121133-24356 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3s                 kubelet          Node newest-cni-20220629121133-24356 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3s                 kubelet          Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [67eaf5abb356] <==
	* {"level":"info","ts":"2022-06-29T19:11:56.459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-29T19:11:56.459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-06-29T19:11:56.459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-06-29T19:11:56.459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-06-29T19:11:56.459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-06-29T19:11:56.459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-06-29T19:11:56.459Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:11:56.460Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:11:56.460Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:11:56.460Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:11:56.460Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:newest-cni-20220629121133-24356 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-29T19:11:56.460Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T19:11:56.460Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T19:11:56.461Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-29T19:11:56.461Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-29T19:11:56.461Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-29T19:11:56.461Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-06-29T19:12:17.155Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-06-29T19:12:17.155Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"newest-cni-20220629121133-24356","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	WARNING: 2022/06/29 19:12:17 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/29 19:12:17 [core] grpc: addrConn.createTransport failed to connect to {192.168.67.2:2379 192.168.67.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.67.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-06-29T19:12:17.166Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2022-06-29T19:12:17.168Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-06-29T19:12:17.170Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-06-29T19:12:17.170Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"newest-cni-20220629121133-24356","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> etcd [cae530751925] <==
	* {"level":"info","ts":"2022-06-29T19:12:40.956Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"8688e899f7831fc7","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-06-29T19:12:40.956Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-06-29T19:12:40.957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-06-29T19:12:40.958Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-06-29T19:12:40.958Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:12:40.961Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:12:40.961Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-29T19:12:40.961Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-29T19:12:40.961Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-29T19:12:40.962Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-06-29T19:12:40.962Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-06-29T19:12:42.852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2022-06-29T19:12:42.852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-06-29T19:12:42.852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-06-29T19:12:42.852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2022-06-29T19:12:42.852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-06-29T19:12:42.852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2022-06-29T19:12:42.852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-06-29T19:12:42.853Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:newest-cni-20220629121133-24356 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-29T19:12:42.853Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T19:12:42.853Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-29T19:12:42.853Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-29T19:12:42.853Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T19:12:42.854Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-29T19:12:42.854Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	
	* 
	* ==> kernel <==
	*  19:13:26 up  1:21,  0 users,  load average: 1.07, 1.09, 1.19
	Linux newest-cni-20220629121133-24356 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [24248b5ec744] <==
	* W0629 19:12:18.160266       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160028       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160284       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160289       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160289       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.159820       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160311       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160318       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160317       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160333       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160367       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160346       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160348       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160382       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160365       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160394       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160408       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160412       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160429       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160395       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160471       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160492       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160493       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160508       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160549       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-apiserver [e6b78ff80d34] <==
	* I0629 19:12:44.664307       1 cache.go:39] Caches are synced for autoregister controller
	I0629 19:12:44.670266       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0629 19:12:44.673976       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0629 19:12:45.328063       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0629 19:12:45.555212       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0629 19:12:45.672553       1 handler_proxy.go:102] no RequestInfo found in the context
	E0629 19:12:45.672591       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0629 19:12:45.672631       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0629 19:12:45.672564       1 handler_proxy.go:102] no RequestInfo found in the context
	E0629 19:12:45.672667       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0629 19:12:45.673726       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0629 19:12:45.837810       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0629 19:12:46.156993       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0629 19:12:46.173631       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0629 19:12:46.253297       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0629 19:12:46.266053       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0629 19:12:46.272892       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0629 19:12:48.048827       1 controller.go:611] quota admission added evaluator for: namespaces
	I0629 19:12:48.186656       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.96.123.255]
	I0629 19:12:48.234612       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.97.35.178]
	I0629 19:13:23.418539       1 controller.go:611] quota admission added evaluator for: endpoints
	I0629 19:13:24.163933       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0629 19:13:24.163934       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0629 19:13:24.216693       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [c6b7f1c8b2e0] <==
	* I0629 19:12:13.183226       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0629 19:12:13.185685       1 shared_informer.go:262] Caches are synced for taint
	I0629 19:12:13.185757       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0629 19:12:13.185795       1 node_lifecycle_controller.go:1014] Missing timestamp for Node newest-cni-20220629121133-24356. Assuming now as a timestamp.
	I0629 19:12:13.185829       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0629 19:12:13.185885       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0629 19:12:13.185965       1 event.go:294] "Event occurred" object="newest-cni-20220629121133-24356" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220629121133-24356 event: Registered Node newest-cni-20220629121133-24356 in Controller"
	I0629 19:12:13.187019       1 range_allocator.go:374] Set node newest-cni-20220629121133-24356 PodCIDR to [192.168.0.0/24]
	I0629 19:12:13.198358       1 shared_informer.go:262] Caches are synced for attach detach
	I0629 19:12:13.204745       1 shared_informer.go:262] Caches are synced for HPA
	I0629 19:12:13.329884       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0629 19:12:13.332668       1 shared_informer.go:262] Caches are synced for cronjob
	I0629 19:12:13.382573       1 shared_informer.go:262] Caches are synced for resource quota
	I0629 19:12:13.386095       1 shared_informer.go:262] Caches are synced for resource quota
	I0629 19:12:13.680656       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0629 19:12:13.690185       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0629 19:12:13.795328       1 shared_informer.go:262] Caches are synced for garbage collector
	I0629 19:12:13.878417       1 shared_informer.go:262] Caches are synced for garbage collector
	I0629 19:12:13.878436       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0629 19:12:13.883509       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tgvc5"
	I0629 19:12:14.181498       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-9wn52"
	I0629 19:12:14.264961       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-2gsk5"
	I0629 19:12:14.286781       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-9wn52"
	I0629 19:12:16.407221       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0629 19:12:16.411414       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-44k7n"
	
	* 
	* ==> kube-controller-manager [cf60daa2910e] <==
	* I0629 19:13:23.945427       1 shared_informer.go:262] Caches are synced for GC
	I0629 19:13:23.945427       1 shared_informer.go:262] Caches are synced for daemon sets
	I0629 19:13:23.947762       1 shared_informer.go:262] Caches are synced for taint
	I0629 19:13:23.947814       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0629 19:13:23.947924       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0629 19:13:23.947985       1 node_lifecycle_controller.go:1014] Missing timestamp for Node newest-cni-20220629121133-24356. Assuming now as a timestamp.
	I0629 19:13:23.948013       1 node_lifecycle_controller.go:1165] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0629 19:13:23.948041       1 event.go:294] "Event occurred" object="newest-cni-20220629121133-24356" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220629121133-24356 event: Registered Node newest-cni-20220629121133-24356 in Controller"
	I0629 19:13:23.957735       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0629 19:13:23.960044       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0629 19:13:23.962424       1 shared_informer.go:262] Caches are synced for stateful set
	I0629 19:13:24.005429       1 shared_informer.go:262] Caches are synced for resource quota
	I0629 19:13:24.005659       1 shared_informer.go:262] Caches are synced for cronjob
	I0629 19:13:24.011444       1 shared_informer.go:262] Caches are synced for disruption
	I0629 19:13:24.011458       1 disruption.go:371] Sending events to api server.
	I0629 19:13:24.011912       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0629 19:13:24.012766       1 shared_informer.go:262] Caches are synced for resource quota
	I0629 19:13:24.013739       1 shared_informer.go:262] Caches are synced for ephemeral
	I0629 19:13:24.167272       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0629 19:13:24.168548       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	I0629 19:13:24.316838       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-vd4rr"
	I0629 19:13:24.319553       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-2jh4t"
	I0629 19:13:24.440079       1 shared_informer.go:262] Caches are synced for garbage collector
	I0629 19:13:24.510086       1 shared_informer.go:262] Caches are synced for garbage collector
	I0629 19:13:24.510175       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [5ceda341afbb] <==
	* I0629 19:12:45.815065       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0629 19:12:45.815130       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0629 19:12:45.815151       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0629 19:12:45.833861       1 server_others.go:206] "Using iptables Proxier"
	I0629 19:12:45.834551       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0629 19:12:45.834623       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0629 19:12:45.834702       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0629 19:12:45.834795       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 19:12:45.835098       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 19:12:45.835671       1 server.go:661] "Version info" version="v1.24.2"
	I0629 19:12:45.835770       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 19:12:45.836545       1 config.go:444] "Starting node config controller"
	I0629 19:12:45.836584       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0629 19:12:45.836802       1 config.go:317] "Starting service config controller"
	I0629 19:12:45.836857       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0629 19:12:45.840488       1 config.go:226] "Starting endpoint slice config controller"
	I0629 19:12:45.840516       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0629 19:12:45.840527       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0629 19:12:45.937671       1 shared_informer.go:262] Caches are synced for service config
	I0629 19:12:45.937817       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-proxy [995d90c1cfbe] <==
	* I0629 19:12:15.074508       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0629 19:12:15.074581       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0629 19:12:15.074601       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0629 19:12:15.157156       1 server_others.go:206] "Using iptables Proxier"
	I0629 19:12:15.157228       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0629 19:12:15.157237       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0629 19:12:15.157248       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0629 19:12:15.157276       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 19:12:15.157374       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 19:12:15.157506       1 server.go:661] "Version info" version="v1.24.2"
	I0629 19:12:15.157512       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 19:12:15.158049       1 config.go:317] "Starting service config controller"
	I0629 19:12:15.158103       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0629 19:12:15.158110       1 config.go:226] "Starting endpoint slice config controller"
	I0629 19:12:15.158120       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0629 19:12:15.158589       1 config.go:444] "Starting node config controller"
	I0629 19:12:15.158613       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0629 19:12:15.258230       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0629 19:12:15.258258       1 shared_informer.go:262] Caches are synced for service config
	I0629 19:12:15.258717       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [3ee0db0d474b] <==
	* E0629 19:11:58.258207       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0629 19:11:58.258307       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0629 19:11:58.258339       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0629 19:11:58.258392       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0629 19:11:58.258423       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0629 19:11:58.258813       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0629 19:11:58.258847       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0629 19:11:59.093530       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0629 19:11:59.093568       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0629 19:11:59.126003       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0629 19:11:59.126077       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0629 19:11:59.126003       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0629 19:11:59.126114       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0629 19:11:59.188925       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0629 19:11:59.188964       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0629 19:11:59.249913       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0629 19:11:59.249953       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0629 19:11:59.348382       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0629 19:11:59.348435       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0629 19:11:59.372515       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0629 19:11:59.372553       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0629 19:12:02.353684       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0629 19:12:17.149621       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0629 19:12:17.151223       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0629 19:12:17.151399       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	* 
	* ==> kube-scheduler [4d7ec14e3d56] <==
	* W0629 19:12:40.973007       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0629 19:12:42.010252       1 serving.go:348] Generated self-signed cert in-memory
	W0629 19:12:44.564768       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0629 19:12:44.564804       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0629 19:12:44.564811       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0629 19:12:44.564816       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0629 19:12:44.629627       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.2"
	I0629 19:12:44.629661       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 19:12:44.630991       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0629 19:12:44.631065       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0629 19:12:44.631038       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0629 19:12:44.632554       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0629 19:12:44.732160       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-29 19:12:30 UTC, end at Wed 2022-06-29 19:13:28 UTC. --
	Jun 29 19:13:27 newest-cni-20220629121133-24356 kubelet[3985]:         
	Jun 29 19:13:27 newest-cni-20220629121133-24356 kubelet[3985]:         Try `iptables -h' or 'iptables --help' for more information.
	Jun 29 19:13:27 newest-cni-20220629121133-24356 kubelet[3985]:         ]
	Jun 29 19:13:27 newest-cni-20220629121133-24356 kubelet[3985]:  > pod="kube-system/coredns-6d4b75cb6d-2gsk5"
	Jun 29 19:13:27 newest-cni-20220629121133-24356 kubelet[3985]: E0629 19:13:27.862564    3985 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6d4b75cb6d-2gsk5_kube-system(c9d7132e-f877-48c6-9493-810c7fdcff0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6d4b75cb6d-2gsk5_kube-system(c9d7132e-f877-48c6-9493-810c7fdcff0c)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"892a319a1f3c58c4c74fc0e894fd5e61f1e7a582696f753649457b122d441350\\\" network for pod \\\"coredns-6d4b75cb6d-2gsk5\\\": networkPlugin cni failed to set up pod \\\"coredns-6d4b75cb6d-2gsk5_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"892a319a1f3c58c4c74fc0e894fd5e61f1e7a582696f753649457b122d441350\\\" network for pod \\\"coredns-6d4b75cb6d-2gsk5\\\": networkPlugin cni failed to teardown pod \\\"coredns-6d4b75cb6d-2gsk5_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.14 -j CNI-9d6bf74a7aeec2f2ffdc6366 -m comment --comment name: \\\"crio\\\" id: \\\"892a319a1f3c58c4c74fc0e894fd5e61f1e7a582696f753649457b122d441350\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-9d6bf74a7aeec2f2ffdc6366':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-6d4b75cb6d-2gsk5" podUID=c9d7132e-f877-48c6-9493-810c7fdcff0c
	Jun 29 19:13:28 newest-cni-20220629121133-24356 kubelet[3985]: I0629 19:13:28.163673    3985 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="892a319a1f3c58c4c74fc0e894fd5e61f1e7a582696f753649457b122d441350"
	Jun 29 19:13:28 newest-cni-20220629121133-24356 kubelet[3985]: E0629 19:13:28.323389    3985 remote_runtime.go:212] "RunPodSandbox from runtime service failed" err=<
	Jun 29 19:13:28 newest-cni-20220629121133-24356 kubelet[3985]:         rpc error: code = Unknown desc = [failed to set up sandbox container "0865ddd9a859cf5896a2f6148ef18cc2f07090e21934b2b498828caad1d4fbee" network for pod "metrics-server-5c6f97fb75-44k7n": networkPlugin cni failed to set up pod "metrics-server-5c6f97fb75-44k7n_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "0865ddd9a859cf5896a2f6148ef18cc2f07090e21934b2b498828caad1d4fbee" network for pod "metrics-server-5c6f97fb75-44k7n": networkPlugin cni failed to teardown pod "metrics-server-5c6f97fb75-44k7n_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.15 -j CNI-f417561849af16d4e8ce2c87 -m comment --comment name: "crio" id: "0865ddd9a859cf5896a2f6148ef18cc2f07090e21934b2b498828caad1d4fbee" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-f417561849af16d4e8ce2c87':No such file or directory
	Jun 29 19:13:28 newest-cni-20220629121133-24356 kubelet[3985]:         
	Jun 29 19:13:28 newest-cni-20220629121133-24356 kubelet[3985]:         Try `iptables -h' or 'iptables --help' for more information.
	Jun 29 19:13:28 newest-cni-20220629121133-24356 kubelet[3985]:         ]
	Jun 29 19:13:28 newest-cni-20220629121133-24356 kubelet[3985]:  >
	Jun 29 19:13:28 newest-cni-20220629121133-24356 kubelet[3985]: E0629 19:13:28.323455    3985 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=<
	Jun 29 19:13:28 newest-cni-20220629121133-24356 kubelet[3985]:         rpc error: code = Unknown desc = [failed to set up sandbox container "0865ddd9a859cf5896a2f6148ef18cc2f07090e21934b2b498828caad1d4fbee" network for pod "metrics-server-5c6f97fb75-44k7n": networkPlugin cni failed to set up pod "metrics-server-5c6f97fb75-44k7n_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "0865ddd9a859cf5896a2f6148ef18cc2f07090e21934b2b498828caad1d4fbee" network for pod "metrics-server-5c6f97fb75-44k7n": networkPlugin cni failed to teardown pod "metrics-server-5c6f97fb75-44k7n_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.15 -j CNI-f417561849af16d4e8ce2c87 -m comment --comment name: "crio" id: "0865ddd9a859cf5896a2f6148ef18cc2f07090e21934b2b498828caad1d4fbee" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-f417561849af16d4e8ce2c87':No such file or directory
	Jun 29 19:13:28 newest-cni-20220629121133-24356 kubelet[3985]:         
	Jun 29 19:13:28 newest-cni-20220629121133-24356 kubelet[3985]:         Try `iptables -h' or 'iptables --help' for more information.
	Jun 29 19:13:28 newest-cni-20220629121133-24356 kubelet[3985]:         ]
	Jun 29 19:13:28 newest-cni-20220629121133-24356 kubelet[3985]:  > pod="kube-system/metrics-server-5c6f97fb75-44k7n"
	Jun 29 19:13:28 newest-cni-20220629121133-24356 kubelet[3985]: E0629 19:13:28.323504    3985 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err=<
	Jun 29 19:13:28 newest-cni-20220629121133-24356 kubelet[3985]:         rpc error: code = Unknown desc = [failed to set up sandbox container "0865ddd9a859cf5896a2f6148ef18cc2f07090e21934b2b498828caad1d4fbee" network for pod "metrics-server-5c6f97fb75-44k7n": networkPlugin cni failed to set up pod "metrics-server-5c6f97fb75-44k7n_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "0865ddd9a859cf5896a2f6148ef18cc2f07090e21934b2b498828caad1d4fbee" network for pod "metrics-server-5c6f97fb75-44k7n": networkPlugin cni failed to teardown pod "metrics-server-5c6f97fb75-44k7n_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.15 -j CNI-f417561849af16d4e8ce2c87 -m comment --comment name: "crio" id: "0865ddd9a859cf5896a2f6148ef18cc2f07090e21934b2b498828caad1d4fbee" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-f417561849af16d4e8ce2c87':No such file or directory
	Jun 29 19:13:28 newest-cni-20220629121133-24356 kubelet[3985]:         
	Jun 29 19:13:28 newest-cni-20220629121133-24356 kubelet[3985]:         Try `iptables -h' or 'iptables --help' for more information.
	Jun 29 19:13:28 newest-cni-20220629121133-24356 kubelet[3985]:         ]
	Jun 29 19:13:28 newest-cni-20220629121133-24356 kubelet[3985]:  > pod="kube-system/metrics-server-5c6f97fb75-44k7n"
	Jun 29 19:13:28 newest-cni-20220629121133-24356 kubelet[3985]: E0629 19:13:28.323602    3985 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"metrics-server-5c6f97fb75-44k7n_kube-system(df9e220a-c0e0-4006-860a-2d99b33b1144)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"metrics-server-5c6f97fb75-44k7n_kube-system(df9e220a-c0e0-4006-860a-2d99b33b1144)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"0865ddd9a859cf5896a2f6148ef18cc2f07090e21934b2b498828caad1d4fbee\\\" network for pod \\\"metrics-server-5c6f97fb75-44k7n\\\": networkPlugin cni failed to set up pod \\\"metrics-server-5c6f97fb75-44k7n_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"0865ddd9a859cf5896a2f6148ef18cc2f07090e21934b2b498828caad1d4fbee\\\" network for pod \\\"metrics-server-5c6f97fb75-44k7n\\\": networkPlugin cni failed to teardown pod \\\"metrics-server-5c6f97fb75-44k7n_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.15 -j CNI-f417561849af16d4e8ce2c87 -m comment --comment name: \\\"crio\\\" id: \\\"0865ddd9a859cf5896a2f6148ef18cc2f07090e21934b2b498828caad1d4fbee\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-f417561849af16d4e8ce2c87':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/metrics-server-5c6f97fb75-44k7n" podUID=df9e220a-c0e0-4006-860a-2d99b33b1144
	
	* 
	* ==> storage-provisioner [832875ac5455] <==
	* I0629 19:12:46.262104       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0629 19:12:46.273799       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0629 19:12:46.273852       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0629 19:13:23.422546       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0629 19:13:23.422699       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5b9cc85e-b026-47a8-8664-6ebffd6b3f3b", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220629121133-24356_bed6bbe4-d19c-4bda-b40c-ba33e906122d became leader
	I0629 19:13:23.422940       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220629121133-24356_bed6bbe4-d19c-4bda-b40c-ba33e906122d!
	I0629 19:13:23.525158       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220629121133-24356_bed6bbe4-d19c-4bda-b40c-ba33e906122d!
	
	* 
	* ==> storage-provisioner [b9102467e462] <==
	* I0629 19:12:16.998671       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0629 19:12:17.007477       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0629 19:12:17.007541       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0629 19:12:17.016576       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0629 19:12:17.016763       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220629121133-24356_d24149b3-3086-417b-9be2-a4f9c0c96904!
	I0629 19:12:17.017043       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5b9cc85e-b026-47a8-8664-6ebffd6b3f3b", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220629121133-24356_d24149b3-3086-417b-9be2-a4f9c0c96904 became leader
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220629121133-24356 -n newest-cni-20220629121133-24356
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-20220629121133-24356 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Done: kubectl --context newest-cni-20220629121133-24356 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (2.116333928s)
helpers_test.go:270: non-running pods: coredns-6d4b75cb6d-2gsk5 metrics-server-5c6f97fb75-44k7n dashboard-metrics-scraper-dffd48c4c-vd4rr kubernetes-dashboard-5fd5574d9f-2jh4t
helpers_test.go:272: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context newest-cni-20220629121133-24356 describe pod coredns-6d4b75cb6d-2gsk5 metrics-server-5c6f97fb75-44k7n dashboard-metrics-scraper-dffd48c4c-vd4rr kubernetes-dashboard-5fd5574d9f-2jh4t
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context newest-cni-20220629121133-24356 describe pod coredns-6d4b75cb6d-2gsk5 metrics-server-5c6f97fb75-44k7n dashboard-metrics-scraper-dffd48c4c-vd4rr kubernetes-dashboard-5fd5574d9f-2jh4t: exit status 1 (238.927729ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-6d4b75cb6d-2gsk5" not found
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-44k7n" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-dffd48c4c-vd4rr" not found
	Error from server (NotFound): pods "kubernetes-dashboard-5fd5574d9f-2jh4t" not found

** /stderr **
helpers_test.go:277: kubectl --context newest-cni-20220629121133-24356 describe pod coredns-6d4b75cb6d-2gsk5 metrics-server-5c6f97fb75-44k7n dashboard-metrics-scraper-dffd48c4c-vd4rr kubernetes-dashboard-5fd5574d9f-2jh4t: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220629121133-24356
helpers_test.go:235: (dbg) docker inspect newest-cni-20220629121133-24356:

-- stdout --
	[
	    {
	        "Id": "d71c7c76c5babd4cceaa3e5f8902c4110f65c51d34ad764fc486008152d70587",
	        "Created": "2022-06-29T19:11:40.324323632Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 315412,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-29T19:12:30.845963709Z",
	            "FinishedAt": "2022-06-29T19:12:28.866662058Z"
	        },
	        "Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
	        "ResolvConfPath": "/var/lib/docker/containers/d71c7c76c5babd4cceaa3e5f8902c4110f65c51d34ad764fc486008152d70587/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d71c7c76c5babd4cceaa3e5f8902c4110f65c51d34ad764fc486008152d70587/hostname",
	        "HostsPath": "/var/lib/docker/containers/d71c7c76c5babd4cceaa3e5f8902c4110f65c51d34ad764fc486008152d70587/hosts",
	        "LogPath": "/var/lib/docker/containers/d71c7c76c5babd4cceaa3e5f8902c4110f65c51d34ad764fc486008152d70587/d71c7c76c5babd4cceaa3e5f8902c4110f65c51d34ad764fc486008152d70587-json.log",
	        "Name": "/newest-cni-20220629121133-24356",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20220629121133-24356:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220629121133-24356",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0952f5cb56fcea7cca5d1c8b6783455954e0db8c0831bef54720f80dac3d67b4-init/diff:/var/lib/docker/overlay2/fffebe0fdfada5807aeb835ff23043496ab70477725ee4f168b630301ac03e45/diff:/var/lib/docker/overlay2/d4eb6d2f34aa8e5c143d900dccdec5da9e3d130567442e6745d4efac5202fe49/diff:/var/lib/docker/overlay2/eb35fadba12ed9c48500d69b77e98e7dd72e90d3de5197d58b370df5b5dca4c7/diff:/var/lib/docker/overlay2/7b63894f671ef1edaa7c3b80a2acbde52dcdb21970e320799b6884e79553ea3e/diff:/var/lib/docker/overlay2/3740b6bc6ff226137eb09a6350d4395dc04bd9012c6c66125dc2ea6b663082cd/diff:/var/lib/docker/overlay2/a2fda66ed4937725e85838baed61cac418abe2ba55b4e664bf944246efcdd371/diff:/var/lib/docker/overlay2/574408913c5c73ee699b85768bbb4c0ce70e697bf6eb623e32017c62e8413acd/diff:/var/lib/docker/overlay2/1cde03c3877bfb18ad0533f814863e3030abec268ff30faceab8815ea7e2daf2/diff:/var/lib/docker/overlay2/52bf889e64b2ea0160f303622d5febb9c52b864e5a6dc2bfa5db90933ccaaa29/diff:/var/lib/docker/overlay2/b131e6ae4a7a7f5705d087e4001676276e4daa26d6acfc99799bb4992e322410/diff:/var/lib/docker/overlay2/3f5c774f6f46936a974bfc6530b012fda75a59b22450e3342486fe400ab4b531/diff:/var/lib/docker/overlay2/8462528084f0c44a79e421427e0e4bc9ddd7642428c47ff1899d41b265223245/diff:/var/lib/docker/overlay2/cb9765866d13ba37669ec242ea0a1af87c92c7291c716e52037a2ccadc64ac82/diff:/var/lib/docker/overlay2/f0d06e6fa53f3ca9622f1efcfac6fe3fd18d2e5b9e07be3d624b0b9987073e55/diff:/var/lib/docker/overlay2/4ebd12d8b25cff2d3d8a989c047b696088121f0964cc7f94c6d0178ef16e3e1f/diff:/var/lib/docker/overlay2/40e16f5720fd3a8c1c8792aea0ec143af819f19cad845dde40b57ed7e372ab73/diff:/var/lib/docker/overlay2/3ce5ee64ba683c997a13b7ffa65978b4c9652772729737facd794209d49251c3/diff:/var/lib/docker/overlay2/c55c549a78d490ea576942661ba65103ea2992693548217973bb8fa1a5948b74/diff:/var/lib/docker/overlay2/4651b16dbc2e22b8a43dc1154546514f2076168d12f9c108f85fe7c6e60325f0/diff:/var/lib/docker/overlay2/9576343ea03501b15b520a83ffdc675c6d9ecd501f6ffcf6564dd75aa4f2812a/diff:/var/lib/docker/overlay2/635ba7d01f96fd1ec1acabf157f4e5c00cbf80adf65b7f8873e444745fef2c9b/diff:/var/lib/docker/overlay2/6bbe0ce6ca00a7eb5bd7c22def5fcab4ebecab4a0b4cbc5ed236429671a41b6c/diff:/var/lib/docker/overlay2/b335551ba0fcfd6bff6ef5627289041f3083dc338e67b4f4728d4937bb6fb33a/diff:/var/lib/docker/overlay2/58cd90f6ad9016f3c4befb63eac504c9d2f0fc66251c5c9e3348080785d3cec4/diff:/var/lib/docker/overlay2/b7d943a8463e032d405d531846436b89574f10efeea6e4f2df92e3bb0e169d8e/diff:/var/lib/docker/overlay2/e633899f71c18e322af1b75837392bc89fd4275534b5bc70037965b0b80a770d/diff:/var/lib/docker/overlay2/651aabda39b5851bd186e23bc84f1029d819ed8eb032b13ac12f50f3d1486bfb/diff:/var/lib/docker/overlay2/3b137e27694d242a419b3fd2f8605837edfe77dae9462c63c3d7b41538e82591/diff:/var/lib/docker/overlay2/e9d4369b871c47acb146b73f8cbe14b89b0f74027df9117a7dc73f5dee8fee1c/diff:/var/lib/docker/overlay2/9379269362a969b07cc7d7f9faff9fa3b745529df38758733014a5dbe2470775/diff:/var/lib/docker/overlay2/9231c154723fa536d9894f703ec0388448e8611d5a01d54bca3a5b0a0b17ffd2/diff:/var/lib/docker/overlay2/9610e37ded5c6da7bd2c8edc56c3ae864637bb354f8ea3d6d1ccee6bd5c2aa7f/diff:/var/lib/docker/overlay2/025ecca5e756b1b8177204df7b2f2567a76dda456b2f1a8e312efd63150a8943/diff:/var/lib/docker/overlay2/7e69089e438e096c36ea0a4a37280fd036841e3287e57635e3407eb58fc0b6da/diff:/var/lib/docker/overlay2/c6d9ef67ed33e64c8ac8c4cdc7c33eb68f5266987969676165cabc2cf2fd346b/diff:/var/lib/docker/overlay2/394627c68237f7993b91eb0c377001630bb2e709dd58f65d899d44a3586dae91/diff:/var/lib/docker/overlay2/0c0c3c94789fc85cd70d9ee2b56d67ce6471d4dced47f21f15152d4edb6bc3e5/diff:/var/lib/docker/overlay2/849809e48c9bcbfe092aa063fcd274f284eeacde89acbb602b439d4cf0aef9b6/diff:/var/lib/docker/overlay2/49c27f0a55f204b161aa2da33ba8004f46cb93bf673975ad1b6286ce659db632/diff:/var/lib/docker/overlay2/a712a8f5cdb2f3840c706296240407405826d2936df034393c1ddf3cf2480b5f/diff:/var/lib/docker/overlay2/47949bfd134ff7a50def5e9b3af3424faf216354d1f157552f3c63c67c2728ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0952f5cb56fcea7cca5d1c8b6783455954e0db8c0831bef54720f80dac3d67b4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0952f5cb56fcea7cca5d1c8b6783455954e0db8c0831bef54720f80dac3d67b4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0952f5cb56fcea7cca5d1c8b6783455954e0db8c0831bef54720f80dac3d67b4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220629121133-24356",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220629121133-24356/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220629121133-24356",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220629121133-24356",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220629121133-24356",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0e82b2ca4590db00240a40edf22b6ce7e49158be14e1ff968a3c5de67800ca63",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "62539"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "62540"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "62541"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "62542"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "62538"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0e82b2ca4590",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220629121133-24356": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d71c7c76c5ba",
	                        "newest-cni-20220629121133-24356"
	                    ],
	                    "NetworkID": "004d36dd9a4f8227511c4d2f49c2d5027c0b47da12140bcd2f2bd493925c6fb3",
	                    "EndpointID": "a0bdbbc6d1274c23b7c18e8ab64f564bcafe0dc5cf7fc6713884607ea8896c03",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220629121133-24356 -n newest-cni-20220629121133-24356
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-20220629121133-24356 logs -n 25

=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p newest-cni-20220629121133-24356 logs -n 25: (4.482024535s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 11:57 PDT | 29 Jun 22 12:02 PDT |
	|         | embed-certs-20220629115611-24356                           |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |          |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |          |         |         |                     |                     |
	|         | --driver=docker                                            |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| ssh     | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:02 PDT | 29 Jun 22 12:02 PDT |
	|         | embed-certs-20220629115611-24356                           |          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |         |         |                     |                     |
	| pause   | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:02 PDT | 29 Jun 22 12:02 PDT |
	|         | embed-certs-20220629115611-24356                           |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| unpause | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | embed-certs-20220629115611-24356                           |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | embed-certs-20220629115611-24356                           |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | embed-certs-20220629115611-24356                           |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:03 PDT |
	|         | disable-driver-mounts-20220629120335-24356                 |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:03 PDT | 29 Jun 22 12:04 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |          |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 29 Jun 22 12:05 PDT | 29 Jun 22 12:05 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:05 PDT | 29 Jun 22 12:05 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 29 Jun 22 12:05 PDT | 29 Jun 22 12:05 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:05 PDT | 29 Jun 22 12:10 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |          |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.2                               |          |         |         |                     |                     |
	| ssh     | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:10 PDT | 29 Jun 22 12:10 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |         |         |                     |                     |
	| pause   | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:10 PDT | 29 Jun 22 12:10 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| unpause | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:11 PDT | 29 Jun 22 12:11 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:11 PDT | 29 Jun 22 12:11 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	| delete  | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:11 PDT | 29 Jun 22 12:11 PDT |
	|         | default-k8s-different-port-20220629120335-24356            |          |         |         |                     |                     |
	| start   | -p newest-cni-20220629121133-24356 --memory=2200           | minikube | jenkins | v1.26.0 | 29 Jun 22 12:11 PDT | 29 Jun 22 12:12 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.2              |          |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | minikube | jenkins | v1.26.0 | 29 Jun 22 12:12 PDT | 29 Jun 22 12:12 PDT |
	|         | newest-cni-20220629121133-24356                            |          |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |          |         |         |                     |                     |
	| stop    | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:12 PDT | 29 Jun 22 12:12 PDT |
	|         | newest-cni-20220629121133-24356                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |          |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | minikube | jenkins | v1.26.0 | 29 Jun 22 12:12 PDT | 29 Jun 22 12:12 PDT |
	|         | newest-cni-20220629121133-24356                            |          |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |          |         |         |                     |                     |
	| start   | -p newest-cni-20220629121133-24356 --memory=2200           | minikube | jenkins | v1.26.0 | 29 Jun 22 12:12 PDT | 29 Jun 22 12:12 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |          |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |          |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |          |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.2              |          |         |         |                     |                     |
	| ssh     | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:12 PDT | 29 Jun 22 12:12 PDT |
	|         | newest-cni-20220629121133-24356                            |          |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |          |         |         |                     |                     |
	| pause   | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:12 PDT | 29 Jun 22 12:12 PDT |
	|         | newest-cni-20220629121133-24356                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	| unpause | -p                                                         | minikube | jenkins | v1.26.0 | 29 Jun 22 12:13 PDT | 29 Jun 22 12:13 PDT |
	|         | newest-cni-20220629121133-24356                            |          |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |          |         |         |                     |                     |
	|---------|------------------------------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/29 12:12:29
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 12:12:29.588569   41733 out.go:296] Setting OutFile to fd 1 ...
	I0629 12:12:29.588742   41733 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 12:12:29.588747   41733 out.go:309] Setting ErrFile to fd 2...
	I0629 12:12:29.588751   41733 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 12:12:29.589081   41733 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 12:12:29.589351   41733 out.go:303] Setting JSON to false
	I0629 12:12:29.604054   41733 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":11517,"bootTime":1656518432,"procs":373,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0629 12:12:29.604211   41733 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 12:12:29.626180   41733 out.go:177] * [newest-cni-20220629121133-24356] minikube v1.26.0 on Darwin 12.4
	I0629 12:12:29.668306   41733 notify.go:193] Checking for updates...
	I0629 12:12:29.689036   41733 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 12:12:29.731359   41733 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 12:12:29.752253   41733 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0629 12:12:29.773342   41733 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 12:12:29.794519   41733 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 12:12:29.817018   41733 config.go:178] Loaded profile config "newest-cni-20220629121133-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 12:12:29.817692   41733 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 12:12:29.888455   41733 docker.go:137] docker version: linux-20.10.16
	I0629 12:12:29.888591   41733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 12:12:30.011986   41733 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 19:12:29.950877572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 12:12:30.054549   41733 out.go:177] * Using the docker driver based on existing profile
	I0629 12:12:30.075406   41733 start.go:284] selected driver: docker
	I0629 12:12:30.075423   41733 start.go:808] validating driver "docker" against &{Name:newest-cni-20220629121133-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220629121133-24356 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:tru
e extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 12:12:30.075522   41733 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 12:12:30.078607   41733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 12:12:30.200514   41733 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 19:12:30.14084278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 12:12:30.200716   41733 start_flags.go:872] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0629 12:12:30.200733   41733 cni.go:95] Creating CNI manager for ""
	I0629 12:12:30.200742   41733 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 12:12:30.200751   41733 start_flags.go:310] config:
	{Name:newest-cni-20220629121133-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220629121133-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clu
ster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:
6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 12:12:30.222917   41733 out.go:177] * Starting control plane node newest-cni-20220629121133-24356 in cluster newest-cni-20220629121133-24356
	I0629 12:12:30.244330   41733 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 12:12:30.265398   41733 out.go:177] * Pulling base image ...
	I0629 12:12:30.308582   41733 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 12:12:30.308633   41733 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 12:12:30.308662   41733 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0629 12:12:30.308690   41733 cache.go:57] Caching tarball of preloaded images
	I0629 12:12:30.308864   41733 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0629 12:12:30.308882   41733 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.2 on docker
	I0629 12:12:30.309747   41733 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/config.json ...
	I0629 12:12:30.374617   41733 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
	I0629 12:12:30.374655   41733 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
	I0629 12:12:30.374668   41733 cache.go:208] Successfully downloaded all kic artifacts
	I0629 12:12:30.374734   41733 start.go:352] acquiring machines lock for newest-cni-20220629121133-24356: {Name:mk042a3b5f3c7fb19f5a27cdd0c4d3bdf872dc19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 12:12:30.374833   41733 start.go:356] acquired machines lock for "newest-cni-20220629121133-24356" in 79.691µs
	I0629 12:12:30.374856   41733 start.go:94] Skipping create...Using existing machine configuration
	I0629 12:12:30.374862   41733 fix.go:55] fixHost starting: 
	I0629 12:12:30.375085   41733 cli_runner.go:164] Run: docker container inspect newest-cni-20220629121133-24356 --format={{.State.Status}}
	I0629 12:12:30.442031   41733 fix.go:103] recreateIfNeeded on newest-cni-20220629121133-24356: state=Stopped err=<nil>
	W0629 12:12:30.442065   41733 fix.go:129] unexpected machine state, will restart: <nil>
	I0629 12:12:30.464074   41733 out.go:177] * Restarting existing docker container for "newest-cni-20220629121133-24356" ...
	I0629 12:12:30.486024   41733 cli_runner.go:164] Run: docker start newest-cni-20220629121133-24356
	I0629 12:12:30.850374   41733 cli_runner.go:164] Run: docker container inspect newest-cni-20220629121133-24356 --format={{.State.Status}}
	I0629 12:12:30.924181   41733 kic.go:416] container "newest-cni-20220629121133-24356" state is running.
	I0629 12:12:30.925115   41733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220629121133-24356
	I0629 12:12:31.006727   41733 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/config.json ...
	I0629 12:12:31.007237   41733 machine.go:88] provisioning docker machine ...
	I0629 12:12:31.007269   41733 ubuntu.go:169] provisioning hostname "newest-cni-20220629121133-24356"
	I0629 12:12:31.007380   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:31.083305   41733 main.go:134] libmachine: Using SSH client type: native
	I0629 12:12:31.083491   41733 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 62539 <nil> <nil>}
	I0629 12:12:31.083504   41733 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220629121133-24356 && echo "newest-cni-20220629121133-24356" | sudo tee /etc/hostname
	I0629 12:12:31.211242   41733 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220629121133-24356
	
	I0629 12:12:31.211315   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:31.286171   41733 main.go:134] libmachine: Using SSH client type: native
	I0629 12:12:31.286391   41733 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 62539 <nil> <nil>}
	I0629 12:12:31.286414   41733 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220629121133-24356' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220629121133-24356/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220629121133-24356' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0629 12:12:31.404993   41733 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 12:12:31.405015   41733 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube}
	I0629 12:12:31.405050   41733 ubuntu.go:177] setting up certificates
	I0629 12:12:31.405062   41733 provision.go:83] configureAuth start
	I0629 12:12:31.405134   41733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220629121133-24356
	I0629 12:12:31.479685   41733 provision.go:138] copyHostCerts
	I0629 12:12:31.479785   41733 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem, removing ...
	I0629 12:12:31.479795   41733 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem
	I0629 12:12:31.479881   41733 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.pem (1082 bytes)
	I0629 12:12:31.480083   41733 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem, removing ...
	I0629 12:12:31.480095   41733 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem
	I0629 12:12:31.480153   41733 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cert.pem (1123 bytes)
	I0629 12:12:31.480301   41733 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem, removing ...
	I0629 12:12:31.480307   41733 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem
	I0629 12:12:31.480382   41733 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/key.pem (1675 bytes)
	I0629 12:12:31.480500   41733 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220629121133-24356 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220629121133-24356]
	I0629 12:12:31.553993   41733 provision.go:172] copyRemoteCerts
	I0629 12:12:31.554070   41733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0629 12:12:31.554128   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:31.632422   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:31.719010   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0629 12:12:31.736812   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0629 12:12:31.754703   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0629 12:12:31.775146   41733 provision.go:86] duration metric: configureAuth took 370.060143ms
	I0629 12:12:31.775160   41733 ubuntu.go:193] setting minikube options for container-runtime
	I0629 12:12:31.775316   41733 config.go:178] Loaded profile config "newest-cni-20220629121133-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 12:12:31.775378   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:31.847694   41733 main.go:134] libmachine: Using SSH client type: native
	I0629 12:12:31.847864   41733 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 62539 <nil> <nil>}
	I0629 12:12:31.847875   41733 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0629 12:12:31.967172   41733 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0629 12:12:31.967183   41733 ubuntu.go:71] root file system type: overlay
	I0629 12:12:31.967317   41733 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0629 12:12:31.967387   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:32.037988   41733 main.go:134] libmachine: Using SSH client type: native
	I0629 12:12:32.038135   41733 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 62539 <nil> <nil>}
	I0629 12:12:32.038189   41733 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0629 12:12:32.167065   41733 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0629 12:12:32.167155   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:32.238743   41733 main.go:134] libmachine: Using SSH client type: native
	I0629 12:12:32.238893   41733 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 127.0.0.1 62539 <nil> <nil>}
	I0629 12:12:32.238905   41733 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0629 12:12:32.360199   41733 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0629 12:12:32.360216   41733 machine.go:91] provisioned docker machine in 1.352928421s
	I0629 12:12:32.360226   41733 start.go:306] post-start starting for "newest-cni-20220629121133-24356" (driver="docker")
	I0629 12:12:32.360231   41733 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0629 12:12:32.360309   41733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0629 12:12:32.360361   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:32.431487   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:32.517761   41733 ssh_runner.go:195] Run: cat /etc/os-release
	I0629 12:12:32.521520   41733 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0629 12:12:32.521537   41733 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0629 12:12:32.521543   41733 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0629 12:12:32.521548   41733 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0629 12:12:32.521559   41733 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/addons for local assets ...
	I0629 12:12:32.521666   41733 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files for local assets ...
	I0629 12:12:32.521801   41733 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem -> 243562.pem in /etc/ssl/certs
	I0629 12:12:32.521971   41733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0629 12:12:32.529745   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /etc/ssl/certs/243562.pem (1708 bytes)
	I0629 12:12:32.546093   41733 start.go:309] post-start completed in 185.852538ms
	I0629 12:12:32.546163   41733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 12:12:32.546210   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:32.617116   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:32.700718   41733 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
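	The two `df`/`awk` probes above are how the provisioner samples root-volume capacity: row 2 of `df` output is the data line, `$5` of `df -h` is the use-percentage column, and `$4` of `df -BG` is the free space in whole gigabytes. A minimal sketch of the same extraction (the mount point is whatever the caller passes; `-BG` assumes GNU `df`, as in the kicbase image):

```shell
#!/bin/sh
# Extract disk-usage fields the way the log's provisioner does:
# NR==2 selects the data row; $5 is Use%, $4 is Avail.
MOUNT=${1:-/}
USED_PCT=$(df -h "$MOUNT" | awk 'NR==2{print $5}')
AVAIL_G=$(df -BG "$MOUNT" | awk 'NR==2{print $4}')
echo "use=$USED_PCT avail=$AVAIL_G"
```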
	I0629 12:12:32.705139   41733 fix.go:57] fixHost completed within 2.33019891s
	I0629 12:12:32.705152   41733 start.go:81] releasing machines lock for "newest-cni-20220629121133-24356", held for 2.330240179s
	I0629 12:12:32.705224   41733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220629121133-24356
	I0629 12:12:32.776217   41733 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0629 12:12:32.776227   41733 ssh_runner.go:195] Run: systemctl --version
	I0629 12:12:32.776278   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:32.776310   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:32.852787   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:32.854483   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:33.421714   41733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0629 12:12:33.429145   41733 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I0629 12:12:33.441573   41733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 12:12:33.505252   41733 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0629 12:12:33.580689   41733 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0629 12:12:33.591697   41733 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0629 12:12:33.591757   41733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0629 12:12:33.601297   41733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0629 12:12:33.613993   41733 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0629 12:12:33.679329   41733 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0629 12:12:33.744434   41733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 12:12:33.812377   41733 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0629 12:12:34.075341   41733 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0629 12:12:34.147333   41733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0629 12:12:34.213850   41733 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0629 12:12:34.223490   41733 start.go:447] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0629 12:12:34.223554   41733 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0629 12:12:34.227483   41733 start.go:468] Will wait 60s for crictl version
	I0629 12:12:34.227524   41733 ssh_runner.go:195] Run: sudo crictl version
	I0629 12:12:34.255687   41733 start.go:477] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0629 12:12:34.255756   41733 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 12:12:34.290892   41733 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0629 12:12:34.367971   41733 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
	I0629 12:12:34.368104   41733 cli_runner.go:164] Run: docker exec -t newest-cni-20220629121133-24356 dig +short host.docker.internal
	I0629 12:12:34.494783   41733 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0629 12:12:34.494880   41733 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0629 12:12:34.499195   41733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
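	The grep/echo pipeline above is an idempotent hosts-entry update: strip any stale line for the name, append the fresh mapping, and write through a `/tmp/h.$$` temp file so the copy back is a single step. A sketch of the same pattern against a scratch file rather than /etc/hosts (the file contents and the 192.168.65.9 stale entry are illustrative):

```shell
#!/bin/sh
# Idempotently pin a hostname to an IP in a hosts-format file,
# the same way the log updates /etc/hosts, but unprivileged.
TAB=$(printf '\t')
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.65.9\thost.minikube.internal\n' > "$HOSTS"

set_host() {
    ip=$1 name=$2
    # Drop any existing line ending in "<TAB><name>", then append the new one.
    { grep -v "${TAB}${name}\$" "$HOSTS"; printf '%s\t%s\n' "$ip" "$name"; } > "$HOSTS.$$"
    mv "$HOSTS.$$" "$HOSTS"
}

set_host 192.168.65.2 host.minikube.internal
set_host 192.168.65.2 host.minikube.internal   # rerun is a no-op

COUNT=$(grep -c 'host.minikube.internal' "$HOSTS")
echo "entries: $COUNT"
rm -f "$HOSTS"
```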
	I0629 12:12:34.508835   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:34.602759   41733 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0629 12:12:34.623818   41733 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 12:12:34.623948   41733 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 12:12:34.654471   41733 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0629 12:12:34.654492   41733 docker.go:533] Images already preloaded, skipping extraction
	I0629 12:12:34.654556   41733 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0629 12:12:34.685516   41733 docker.go:602] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.2
	k8s.gcr.io/kube-controller-manager:v1.24.2
	k8s.gcr.io/kube-scheduler:v1.24.2
	k8s.gcr.io/kube-proxy:v1.24.2
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0629 12:12:34.685540   41733 cache_images.go:84] Images are preloaded, skipping loading
	I0629 12:12:34.685619   41733 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0629 12:12:34.759279   41733 cni.go:95] Creating CNI manager for ""
	I0629 12:12:34.759290   41733 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 12:12:34.759307   41733 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0629 12:12:34.759324   41733 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220629121133-24356 NodeName:newest-cni-20220629121133-24356 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0629 12:12:34.759449   41733 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-20220629121133-24356"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0629 12:12:34.759532   41733 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220629121133-24356 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220629121133-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0629 12:12:34.759600   41733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
	I0629 12:12:34.767268   41733 binaries.go:44] Found k8s binaries, skipping transfer
	I0629 12:12:34.767320   41733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0629 12:12:34.774536   41733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (530 bytes)
	I0629 12:12:34.787443   41733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0629 12:12:34.799855   41733 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2189 bytes)
	I0629 12:12:34.812169   41733 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0629 12:12:34.815908   41733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0629 12:12:34.825528   41733 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356 for IP: 192.168.67.2
	I0629 12:12:34.825648   41733 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key
	I0629 12:12:34.825704   41733 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key
	I0629 12:12:34.825782   41733 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/client.key
	I0629 12:12:34.825849   41733 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/apiserver.key.c7fa3a9e
	I0629 12:12:34.825919   41733 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/proxy-client.key
	I0629 12:12:34.826130   41733 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem (1338 bytes)
	W0629 12:12:34.826169   41733 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356_empty.pem, impossibly tiny 0 bytes
	I0629 12:12:34.826180   41733 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca-key.pem (1679 bytes)
	I0629 12:12:34.826212   41733 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/ca.pem (1082 bytes)
	I0629 12:12:34.826244   41733 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/cert.pem (1123 bytes)
	I0629 12:12:34.826274   41733 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/key.pem (1675 bytes)
	I0629 12:12:34.826337   41733 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem (1708 bytes)
	I0629 12:12:34.826873   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0629 12:12:34.843557   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0629 12:12:34.860588   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0629 12:12:34.877409   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/newest-cni-20220629121133-24356/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0629 12:12:34.893984   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0629 12:12:34.910737   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0629 12:12:34.927624   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0629 12:12:34.944443   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0629 12:12:34.961512   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/certs/24356.pem --> /usr/share/ca-certificates/24356.pem (1338 bytes)
	I0629 12:12:34.978266   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/ssl/certs/243562.pem --> /usr/share/ca-certificates/243562.pem (1708 bytes)
	I0629 12:12:34.995472   41733 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0629 12:12:35.012505   41733 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0629 12:12:35.024964   41733 ssh_runner.go:195] Run: openssl version
	I0629 12:12:35.030215   41733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/243562.pem && ln -fs /usr/share/ca-certificates/243562.pem /etc/ssl/certs/243562.pem"
	I0629 12:12:35.038129   41733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/243562.pem
	I0629 12:12:35.042019   41733 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun 29 17:58 /usr/share/ca-certificates/243562.pem
	I0629 12:12:35.042061   41733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/243562.pem
	I0629 12:12:35.047267   41733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/243562.pem /etc/ssl/certs/3ec20f2e.0"
	I0629 12:12:35.054538   41733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0629 12:12:35.062220   41733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0629 12:12:35.066203   41733 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun 29 17:54 /usr/share/ca-certificates/minikubeCA.pem
	I0629 12:12:35.066240   41733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0629 12:12:35.071307   41733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0629 12:12:35.078467   41733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24356.pem && ln -fs /usr/share/ca-certificates/24356.pem /etc/ssl/certs/24356.pem"
	I0629 12:12:35.086274   41733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24356.pem
	I0629 12:12:35.090276   41733 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun 29 17:58 /usr/share/ca-certificates/24356.pem
	I0629 12:12:35.090313   41733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24356.pem
	I0629 12:12:35.095533   41733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24356.pem /etc/ssl/certs/51391683.0"
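	The three `openssl x509 -hash` / `ln -fs` sequences above implement OpenSSL's hashed CA-directory layout: each trusted certificate in /etc/ssl/certs is exposed under a symlink named after its subject-name hash plus a `.0` suffix, which is how TLS stacks look certificates up by subject. A sketch of the same step against a throwaway self-signed certificate (the `example-ca` subject and file names are illustrative, not from the log):

```shell
#!/bin/sh
# Reproduce the subject-hash naming used for /etc/ssl/certs entries.
DIR=$(mktemp -d)
# Generate a throwaway self-signed cert (no prompts, 1-day validity).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example-ca" \
    -keyout "$DIR/ca.key" -out "$DIR/ca.pem" -days 1 2>/dev/null

# The hash is derived from the certificate's subject name.
HASH=$(openssl x509 -hash -noout -in "$DIR/ca.pem")

# Link the cert under "<hash>.0", as the log does for each .pem file.
ln -fs "$DIR/ca.pem" "$DIR/$HASH.0"

# The symlinked copy still resolves to the same certificate.
SUBJ=$(openssl x509 -subject -noout -in "$DIR/$HASH.0")
echo "$HASH.0 -> $SUBJ"
```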
	I0629 12:12:35.102606   41733 kubeadm.go:395] StartCluster: {Name:newest-cni-20220629121133-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:newest-cni-20220629121133-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 12:12:35.102713   41733 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 12:12:35.132448   41733 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0629 12:12:35.140266   41733 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0629 12:12:35.140281   41733 kubeadm.go:626] restartCluster start
	I0629 12:12:35.140327   41733 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0629 12:12:35.146994   41733 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:35.147056   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:35.219650   41733 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220629121133-24356" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 12:12:35.219829   41733 kubeconfig.go:127] "newest-cni-20220629121133-24356" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig - will repair!
	I0629 12:12:35.220162   41733 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk20ebad566718388182fa7c9da1cb4ef6bd9ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 12:12:35.221494   41733 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0629 12:12:35.229173   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:35.229229   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:35.237398   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:35.438653   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:35.438806   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:35.449653   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:35.638446   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:35.638653   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:35.649603   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:35.839288   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:35.839468   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:35.850410   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:36.038612   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:36.038695   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:36.048631   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:36.238683   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:36.238815   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:36.249963   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:36.438649   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:36.438836   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:36.450067   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:36.638692   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:36.638870   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:36.649564   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:36.838638   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:36.838714   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:36.847331   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:37.038701   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:37.038777   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:37.049187   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:37.238747   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:37.238937   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:37.249608   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:37.438729   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:37.438903   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:37.449567   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:37.639628   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:37.639781   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:37.650435   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:37.838708   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:37.838812   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:37.849567   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:38.038733   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:38.038840   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:38.049254   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:38.239139   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:38.239235   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:38.250125   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:38.250135   41733 api_server.go:165] Checking apiserver status ...
	I0629 12:12:38.250179   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0629 12:12:38.258469   41733 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:38.258482   41733 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0629 12:12:38.258492   41733 kubeadm.go:1092] stopping kube-system containers ...
	I0629 12:12:38.258551   41733 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0629 12:12:38.289744   41733 docker.go:434] Stopping containers: [b9102467e462 b7ac5a073ab7 1aaad07a6a07 137a44de5e43 995d90c1cfbe 2da50998e266 2c49cd15cdb0 bd178c2d55c0 67eaf5abb356 c6cdb8f06829 c6b7f1c8b2e0 154ec38f5f06 24248b5ec744 3ee0db0d474b 5270423c28e0 fcf2cbbeac73]
	I0629 12:12:38.289817   41733 ssh_runner.go:195] Run: docker stop b9102467e462 b7ac5a073ab7 1aaad07a6a07 137a44de5e43 995d90c1cfbe 2da50998e266 2c49cd15cdb0 bd178c2d55c0 67eaf5abb356 c6cdb8f06829 c6b7f1c8b2e0 154ec38f5f06 24248b5ec744 3ee0db0d474b 5270423c28e0 fcf2cbbeac73
	I0629 12:12:38.320147   41733 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0629 12:12:38.330428   41733 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0629 12:12:38.340507   41733 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jun 29 19:11 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jun 29 19:11 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jun 29 19:12 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun 29 19:11 /etc/kubernetes/scheduler.conf
	
	I0629 12:12:38.340589   41733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0629 12:12:38.350519   41733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0629 12:12:38.357684   41733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0629 12:12:38.364728   41733 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:38.364780   41733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0629 12:12:38.371710   41733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0629 12:12:38.379123   41733 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0629 12:12:38.379175   41733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0629 12:12:38.385993   41733 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0629 12:12:38.393168   41733 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0629 12:12:38.393180   41733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:12:38.436431   41733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:12:39.589898   41733 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.153415676s)
	I0629 12:12:39.610848   41733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:12:39.777939   41733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:12:39.828297   41733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:12:39.882153   41733 api_server.go:51] waiting for apiserver process to appear ...
	I0629 12:12:39.882214   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:12:40.422672   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:12:40.921240   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:12:40.933995   41733 api_server.go:71] duration metric: took 1.05181252s to wait for apiserver process to appear ...
	I0629 12:12:40.934017   41733 api_server.go:87] waiting for apiserver healthz status ...
	I0629 12:12:40.934032   41733 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:62538/healthz ...
	I0629 12:12:40.935225   41733 api_server.go:256] stopped: https://127.0.0.1:62538/healthz: Get "https://127.0.0.1:62538/healthz": EOF
	I0629 12:12:41.435446   41733 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:62538/healthz ...
	I0629 12:12:44.555903   41733 api_server.go:266] https://127.0.0.1:62538/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0629 12:12:44.555920   41733 api_server.go:102] status: https://127.0.0.1:62538/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0629 12:12:44.935571   41733 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:62538/healthz ...
	I0629 12:12:44.940934   41733 api_server.go:266] https://127.0.0.1:62538/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 12:12:44.940951   41733 api_server.go:102] status: https://127.0.0.1:62538/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 12:12:45.437041   41733 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:62538/healthz ...
	I0629 12:12:45.444290   41733 api_server.go:266] https://127.0.0.1:62538/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0629 12:12:45.444302   41733 api_server.go:102] status: https://127.0.0.1:62538/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0629 12:12:45.935471   41733 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:62538/healthz ...
	I0629 12:12:45.942308   41733 api_server.go:266] https://127.0.0.1:62538/healthz returned 200:
	ok
	I0629 12:12:45.952038   41733 api_server.go:140] control plane version: v1.24.2
	I0629 12:12:45.952054   41733 api_server.go:130] duration metric: took 5.017880972s to wait for apiserver health ...
	I0629 12:12:45.952061   41733 cni.go:95] Creating CNI manager for ""
	I0629 12:12:45.952067   41733 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 12:12:45.952076   41733 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 12:12:45.960349   41733 system_pods.go:59] 9 kube-system pods found
	I0629 12:12:45.960372   41733 system_pods.go:61] "coredns-6d4b75cb6d-2gsk5" [c9d7132e-f877-48c6-9493-810c7fdcff0c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0629 12:12:45.960384   41733 system_pods.go:61] "coredns-6d4b75cb6d-9wn52" [6cf87e39-b15c-47f7-a015-ff68ce065e5f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0629 12:12:45.960388   41733 system_pods.go:61] "etcd-newest-cni-20220629121133-24356" [b398814e-e32a-4de4-88e5-978e1a2d51b7] Running
	I0629 12:12:45.960392   41733 system_pods.go:61] "kube-apiserver-newest-cni-20220629121133-24356" [31de6ac7-bbc5-4f4d-88df-09aea857ccb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0629 12:12:45.960398   41733 system_pods.go:61] "kube-controller-manager-newest-cni-20220629121133-24356" [b91952e0-8b84-4c7b-a40a-85bc6599941f] Running
	I0629 12:12:45.960403   41733 system_pods.go:61] "kube-proxy-tgvc5" [70f6241f-6d23-4a0d-9d6d-9a51140e9b8d] Running
	I0629 12:12:45.960407   41733 system_pods.go:61] "kube-scheduler-newest-cni-20220629121133-24356" [891e3e1d-be39-482c-872e-822aa00f8f5f] Running
	I0629 12:12:45.960414   41733 system_pods.go:61] "metrics-server-5c6f97fb75-44k7n" [df9e220a-c0e0-4006-860a-2d99b33b1144] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 12:12:45.960421   41733 system_pods.go:61] "storage-provisioner" [4b4463d8-1274-427c-b999-2b566e5081a8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0629 12:12:45.960425   41733 system_pods.go:74] duration metric: took 8.344088ms to wait for pod list to return data ...
	I0629 12:12:45.960431   41733 node_conditions.go:102] verifying NodePressure condition ...
	I0629 12:12:45.964468   41733 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0629 12:12:45.964487   41733 node_conditions.go:123] node cpu capacity is 6
	I0629 12:12:45.964496   41733 node_conditions.go:105] duration metric: took 4.060805ms to run NodePressure ...
	I0629 12:12:45.964507   41733 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0629 12:12:46.316106   41733 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0629 12:12:46.325031   41733 ops.go:34] apiserver oom_adj: -16
	I0629 12:12:46.325046   41733 kubeadm.go:630] restartCluster took 11.184421012s
	I0629 12:12:46.325056   41733 kubeadm.go:397] StartCluster complete in 11.222120608s
	I0629 12:12:46.325077   41733 settings.go:142] acquiring lock: {Name:mk8cd784535a926dd1b6955ad1b3a357865d16d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 12:12:46.325161   41733 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 12:12:46.325817   41733 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig: {Name:mk20ebad566718388182fa7c9da1cb4ef6bd9ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 12:12:46.329466   41733 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220629121133-24356" rescaled to 1
	I0629 12:12:46.329511   41733 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0629 12:12:46.329537   41733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0629 12:12:46.329546   41733 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0629 12:12:46.374352   41733 out.go:177] * Verifying Kubernetes components...
	I0629 12:12:46.329609   41733 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220629121133-24356"
	I0629 12:12:46.329610   41733 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220629121133-24356"
	I0629 12:12:46.329643   41733 addons.go:65] Setting dashboard=true in profile "newest-cni-20220629121133-24356"
	I0629 12:12:46.329674   41733 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220629121133-24356"
	I0629 12:12:46.329796   41733 config.go:178] Loaded profile config "newest-cni-20220629121133-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 12:12:46.395400   41733 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220629121133-24356"
	I0629 12:12:46.395401   41733 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220629121133-24356"
	I0629 12:12:46.395405   41733 addons.go:153] Setting addon dashboard=true in "newest-cni-20220629121133-24356"
	W0629 12:12:46.395449   41733 addons.go:162] addon dashboard should already be in state true
	W0629 12:12:46.395453   41733 addons.go:162] addon storage-provisioner should already be in state true
	I0629 12:12:46.395460   41733 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220629121133-24356"
	W0629 12:12:46.395493   41733 addons.go:162] addon metrics-server should already be in state true
	I0629 12:12:46.395511   41733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 12:12:46.395544   41733 host.go:66] Checking if "newest-cni-20220629121133-24356" exists ...
	I0629 12:12:46.395553   41733 host.go:66] Checking if "newest-cni-20220629121133-24356" exists ...
	I0629 12:12:46.395566   41733 host.go:66] Checking if "newest-cni-20220629121133-24356" exists ...
	I0629 12:12:46.395879   41733 cli_runner.go:164] Run: docker container inspect newest-cni-20220629121133-24356 --format={{.State.Status}}
	I0629 12:12:46.396688   41733 cli_runner.go:164] Run: docker container inspect newest-cni-20220629121133-24356 --format={{.State.Status}}
	I0629 12:12:46.396736   41733 cli_runner.go:164] Run: docker container inspect newest-cni-20220629121133-24356 --format={{.State.Status}}
	I0629 12:12:46.396797   41733 cli_runner.go:164] Run: docker container inspect newest-cni-20220629121133-24356 --format={{.State.Status}}
	I0629 12:12:46.449507   41733 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0629 12:12:46.449527   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:46.551690   41733 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 12:12:46.522779   41733 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220629121133-24356"
	I0629 12:12:46.589626   41733 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 12:12:46.626450   41733 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	W0629 12:12:46.626471   41733 addons.go:162] addon default-storageclass should already be in state true
	I0629 12:12:46.663330   41733 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0629 12:12:46.663339   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0629 12:12:46.663381   41733 host.go:66] Checking if "newest-cni-20220629121133-24356" exists ...
	I0629 12:12:46.700511   41733 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0629 12:12:46.700587   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:46.737559   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0629 12:12:46.775480   41733 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0629 12:12:46.737631   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:46.739078   41733 cli_runner.go:164] Run: docker container inspect newest-cni-20220629121133-24356 --format={{.State.Status}}
	I0629 12:12:46.791603   41733 api_server.go:51] waiting for apiserver process to appear ...
	I0629 12:12:46.796760   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0629 12:12:46.796774   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0629 12:12:46.796792   41733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 12:12:46.796852   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:46.816654   41733 api_server.go:71] duration metric: took 487.096402ms to wait for apiserver process to appear ...
	I0629 12:12:46.816710   41733 api_server.go:87] waiting for apiserver healthz status ...
	I0629 12:12:46.816733   41733 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:62538/healthz ...
	I0629 12:12:46.823384   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:46.826055   41733 api_server.go:266] https://127.0.0.1:62538/healthz returned 200:
	ok
	I0629 12:12:46.828540   41733 api_server.go:140] control plane version: v1.24.2
	I0629 12:12:46.828560   41733 api_server.go:130] duration metric: took 11.838984ms to wait for apiserver health ...
	I0629 12:12:46.828572   41733 system_pods.go:43] waiting for kube-system pods to appear ...
	I0629 12:12:46.836929   41733 system_pods.go:59] 9 kube-system pods found
	I0629 12:12:46.836954   41733 system_pods.go:61] "coredns-6d4b75cb6d-2gsk5" [c9d7132e-f877-48c6-9493-810c7fdcff0c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0629 12:12:46.836967   41733 system_pods.go:61] "coredns-6d4b75cb6d-9wn52" [6cf87e39-b15c-47f7-a015-ff68ce065e5f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0629 12:12:46.836979   41733 system_pods.go:61] "etcd-newest-cni-20220629121133-24356" [b398814e-e32a-4de4-88e5-978e1a2d51b7] Running
	I0629 12:12:46.836990   41733 system_pods.go:61] "kube-apiserver-newest-cni-20220629121133-24356" [31de6ac7-bbc5-4f4d-88df-09aea857ccb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0629 12:12:46.837006   41733 system_pods.go:61] "kube-controller-manager-newest-cni-20220629121133-24356" [b91952e0-8b84-4c7b-a40a-85bc6599941f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0629 12:12:46.837015   41733 system_pods.go:61] "kube-proxy-tgvc5" [70f6241f-6d23-4a0d-9d6d-9a51140e9b8d] Running
	I0629 12:12:46.837022   41733 system_pods.go:61] "kube-scheduler-newest-cni-20220629121133-24356" [891e3e1d-be39-482c-872e-822aa00f8f5f] Running
	I0629 12:12:46.837029   41733 system_pods.go:61] "metrics-server-5c6f97fb75-44k7n" [df9e220a-c0e0-4006-860a-2d99b33b1144] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0629 12:12:46.837036   41733 system_pods.go:61] "storage-provisioner" [4b4463d8-1274-427c-b999-2b566e5081a8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0629 12:12:46.837042   41733 system_pods.go:74] duration metric: took 8.464446ms to wait for pod list to return data ...
	I0629 12:12:46.837051   41733 default_sa.go:34] waiting for default service account to be created ...
	I0629 12:12:46.840230   41733 default_sa.go:45] found service account: "default"
	I0629 12:12:46.840247   41733 default_sa.go:55] duration metric: took 3.190141ms for default service account to be created ...
	I0629 12:12:46.840258   41733 kubeadm.go:572] duration metric: took 510.708763ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0629 12:12:46.840271   41733 node_conditions.go:102] verifying NodePressure condition ...
	I0629 12:12:46.844218   41733 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0629 12:12:46.844236   41733 node_conditions.go:123] node cpu capacity is 6
	I0629 12:12:46.844244   41733 node_conditions.go:105] duration metric: took 3.970296ms to run NodePressure ...
	I0629 12:12:46.844255   41733 start.go:213] waiting for startup goroutines ...
	I0629 12:12:46.873003   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:46.876793   41733 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0629 12:12:46.876815   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0629 12:12:46.876899   41733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220629121133-24356
	I0629 12:12:46.896227   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:46.940210   41733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0629 12:12:46.962962   41733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62539 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/newest-cni-20220629121133-24356/id_rsa Username:docker}
	I0629 12:12:46.973206   41733 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0629 12:12:46.973219   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0629 12:12:46.987916   41733 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0629 12:12:46.987927   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0629 12:12:47.020597   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0629 12:12:47.020612   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0629 12:12:47.021888   41733 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0629 12:12:47.021898   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0629 12:12:47.035048   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0629 12:12:47.035063   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0629 12:12:47.039533   41733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0629 12:12:47.052647   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0629 12:12:47.052659   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0629 12:12:47.116954   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0629 12:12:47.116967   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0629 12:12:47.126958   41733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0629 12:12:47.134818   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0629 12:12:47.134831   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0629 12:12:47.230278   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0629 12:12:47.230295   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0629 12:12:47.247331   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0629 12:12:47.247345   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0629 12:12:47.314734   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0629 12:12:47.314759   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0629 12:12:47.331421   41733 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0629 12:12:47.331437   41733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0629 12:12:47.348713   41733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0629 12:12:48.031150   41733 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.090881748s)
	I0629 12:12:48.110600   41733 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.071010288s)
	I0629 12:12:48.110630   41733 addons.go:383] Verifying addon metrics-server=true in "newest-cni-20220629121133-24356"
	I0629 12:12:48.266026   41733 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0629 12:12:48.325441   41733 addons.go:414] enableAddons completed in 1.995803691s
	I0629 12:12:48.356437   41733 start.go:506] kubectl: 1.24.0, cluster: 1.24.2 (minor skew: 0)
	I0629 12:12:48.377748   41733 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220629121133-24356" cluster and "default" namespace by default
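The addon-enable sequence above reports per-manifest apply durations in parentheses (e.g. `(1.090881748s)`) before the final `enableAddons completed` line. A minimal sketch of pulling those durations out of such log lines, using two abbreviated copies of the `ssh_runner.go:235] Completed:` lines from this log as sample input:

```python
import re

# Abbreviated copies of the "Completed:" lines from the log above.
lines = '''\
I0629 12:12:48.031150   41733 ssh_runner.go:235] Completed: sudo ... kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.090881748s)
I0629 12:12:48.110600   41733 ssh_runner.go:235] Completed: sudo ... kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml ...: (1.071010288s)
I0629 12:12:48.325441   41733 addons.go:414] enableAddons completed in 1.995803691s
'''

# Parenthesized durations mark individual apply steps; the summary line has none.
durations = [float(m) for m in re.findall(r'\((\d+\.\d+)s\)', lines)]
print(durations)
```

Each individual apply here took just over a second, consistent with the ~2s `enableAddons completed` total.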
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-06-29 19:12:30 UTC, end at Wed 2022-06-29 19:13:33 UTC. --
	Jun 29 19:12:33 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:33.891664648Z" level=info msg="Loading containers: start."
	Jun 29 19:12:34 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:34.021476794Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 29 19:12:34 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:34.054020458Z" level=info msg="Loading containers: done."
	Jun 29 19:12:34 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:34.062415310Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jun 29 19:12:34 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:34.062479308Z" level=info msg="Daemon has completed initialization"
	Jun 29 19:12:34 newest-cni-20220629121133-24356 systemd[1]: Started Docker Application Container Engine.
	Jun 29 19:12:34 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:34.083457179Z" level=info msg="API listen on [::]:2376"
	Jun 29 19:12:34 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:34.088815692Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 29 19:12:46 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:46.474630239Z" level=info msg="ignoring event" container=c6b19ee41ee86496990821bd74a72b1f2eee626fc5d374de8ddcbacec95d8d4f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:12:46 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:46.919020207Z" level=info msg="ignoring event" container=e27793db30c44a3a50a98b2792ae37f2b128af7c03958138e33c97bcea35830b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:12:48 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:48.155290383Z" level=info msg="ignoring event" container=6df3221e3639de879a8686b8664bf7c5151ba1754f2d1f300e90e09af4b7e69c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:12:48 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:48.229656147Z" level=info msg="ignoring event" container=b810ddcb5a2c08a114340919625275f43b1bdb996b2dbfbcf11a7ca744fe232d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:12:48 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:48.876144566Z" level=info msg="ignoring event" container=36f3e6dc6be6575825dea9339a85533cf95ab582f3baa4e662404b8a4e10ec2f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:12:48 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:48.884501978Z" level=info msg="ignoring event" container=ae5600c0854f5f2576b66cfb6cfe69688448c762a4997d8cd40fcdd515018ca6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:12:49 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:49.927468796Z" level=info msg="ignoring event" container=c0c456ff668cd9b48cec7b0a5990c8cabe875983db4ae12d648d896f95e34114 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:12:49 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:12:49.927680592Z" level=info msg="ignoring event" container=4a708cf64ff1701f2da2e8aa1540beec46b1eb0d3b15a0495bfe804b38b79ca1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:13:27 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:13:27.829592279Z" level=info msg="ignoring event" container=892a319a1f3c58c4c74fc0e894fd5e61f1e7a582696f753649457b122d441350 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:13:28 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:13:28.097220647Z" level=info msg="ignoring event" container=0865ddd9a859cf5896a2f6148ef18cc2f07090e21934b2b498828caad1d4fbee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:13:28 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:13:28.717064030Z" level=info msg="ignoring event" container=a9a8d5c657b4640d3087317d769a0afd83c36d6d6e5fa2e9bf1441467e9d7e20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:13:29 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:13:29.721205801Z" level=info msg="ignoring event" container=0d42d66fba440891443ffe28c49299489c858f687cb40aeed650f06ea8072b28 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:13:29 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:13:29.736881751Z" level=info msg="ignoring event" container=90c84d0251de9db3968111467e545e2ba52ef4a71913ce566a3b0497bd20051b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:13:31 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:13:31.788905771Z" level=info msg="ignoring event" container=c2f9cdfdd412b971ca8a619a0b0a31231e11c93d753b54b459cb245c7c0535d4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:13:31 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:13:31.856920891Z" level=info msg="ignoring event" container=d4c299cfd929d94a7818a339a39c1a3b77dbd622cae69d5921b51cfbb31d8cbc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:13:33 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:13:33.318569474Z" level=info msg="ignoring event" container=0182e7e9bc886d4975a8347f4b23663b431ac686eb4e9331c9843b450fc0a61f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 29 19:13:33 newest-cni-20220629121133-24356 dockerd[606]: time="2022-06-29T19:13:33.325626572Z" level=info msg="ignoring event" container=c97d970f2bf85b6b9b39a4e980a11296e099737b179c328d81affb7271cc6787 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
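The dockerd journal above logs one `ignoring event ... TaskDelete` entry per container that exits during the cluster restart. A minimal sketch of extracting the affected container IDs from such lines, using two abbreviated sample lines in the same format as the log above:

```python
import re

# Two sample dockerd journal lines in the format shown above (hostname abbreviated).
log = '''\
Jun 29 19:13:33 node dockerd[606]: time="2022-06-29T19:13:33.318569474Z" level=info msg="ignoring event" container=0182e7e9bc886d4975a8347f4b23663b431ac686eb4e9331c9843b450fc0a61f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jun 29 19:13:33 node dockerd[606]: time="2022-06-29T19:13:33.325626572Z" level=info msg="ignoring event" container=c97d970f2bf85b6b9b39a4e980a11296e099737b179c328d81affb7271cc6787 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
'''

# Pull the 64-hex container IDs out of the "ignoring event" lines.
ids = re.findall(r'msg="ignoring event" container=([0-9a-f]{64})', log)
print(len(ids), ids[0][:12])
```

The shortened 12-character prefix is what tools like `docker ps` display for the same IDs.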
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	832875ac54550       6e38f40d628db       47 seconds ago       Running             storage-provisioner       1                   dfd4bdbabf56a
	5ceda341afbb9       a634548d10b03       48 seconds ago       Running             kube-proxy                1                   49c54f147a8f0
	cae530751925f       aebe758cef4cd       53 seconds ago       Running             etcd                      1                   c9f793b29f3c4
	cf60daa2910e8       34cdf99b1bb3b       53 seconds ago       Running             kube-controller-manager   1                   83b10ece13d15
	e6b78ff80d34b       d3377ffb7177c       53 seconds ago       Running             kube-apiserver            1                   6f6d81d2f7f11
	4d7ec14e3d562       5d725196c1f47       53 seconds ago       Running             kube-scheduler            1                   2323fcceb950d
	b9102467e4628       6e38f40d628db       About a minute ago   Exited              storage-provisioner       0                   1aaad07a6a073
	995d90c1cfbed       a634548d10b03       About a minute ago   Exited              kube-proxy                0                   bd178c2d55c0a
	67eaf5abb3561       aebe758cef4cd       About a minute ago   Exited              etcd                      0                   c6cdb8f068299
	c6b7f1c8b2e0f       34cdf99b1bb3b       About a minute ago   Exited              kube-controller-manager   0                   154ec38f5f06b
	24248b5ec7441       d3377ffb7177c       About a minute ago   Exited              kube-apiserver            0                   fcf2cbbeac73f
	3ee0db0d474bc       5d725196c1f47       About a minute ago   Exited              kube-scheduler            0                   5270423c28e0c
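The container-status table above shows each control-plane component twice: attempt 0 in `Exited` state and attempt 1 `Running`, i.e. every component was restarted. A minimal sketch of listing the exited containers from a table in this format, using three abbreviated rows as sample input:

```python
# Three abbreviated rows in the same column layout as the table above:
# CONTAINER  IMAGE  CREATED  STATE  NAME  ATTEMPT  POD ID
table = '''\
832875ac54550       6e38f40d628db       47 seconds ago       Running             storage-provisioner       1                   dfd4bdbabf56a
b9102467e4628       6e38f40d628db       About a minute ago   Exited              storage-provisioner       0                   1aaad07a6a073
995d90c1cfbed       a634548d10b03       About a minute ago   Exited              kube-proxy                0                   bd178c2d55c0a
'''

# CREATED spans a variable number of tokens, so index columns from the right:
# [-1]=pod id, [-2]=attempt, [-3]=name, [-4]=state.
exited = [line.split()[-3] for line in table.splitlines()
          if line.split()[-4] == "Exited"]
print(exited)
```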
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20220629121133-24356
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20220629121133-24356
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=80ef72c6e06144133907f90b1b2924df52b551ed
	                    minikube.k8s.io/name=newest-cni-20220629121133-24356
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_29T12_12_01_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 29 Jun 2022 19:11:58 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20220629121133-24356
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 29 Jun 2022 19:13:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 29 Jun 2022 19:13:23 +0000   Wed, 29 Jun 2022 19:11:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 29 Jun 2022 19:13:23 +0000   Wed, 29 Jun 2022 19:11:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 29 Jun 2022 19:13:23 +0000   Wed, 29 Jun 2022 19:11:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 29 Jun 2022 19:13:23 +0000   Wed, 29 Jun 2022 19:13:23 +0000   KubeletNotReady              PLEG is not healthy: pleg has yet to be successful
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    newest-cni-20220629121133-24356
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbe1e1cef6e940328962dca52b3c5731
	  System UUID:                46aaca5c-da45-4fce-b49b-973f0583fbb1
	  Boot ID:                    fadc233d-8cf8-4f28-b4a1-fb218440cdcd
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.2
	  Kube-Proxy Version:         v1.24.2
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-2gsk5                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     80s
	  kube-system                 etcd-newest-cni-20220629121133-24356                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         93s
	  kube-system                 kube-apiserver-newest-cni-20220629121133-24356             250m (4%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-controller-manager-newest-cni-20220629121133-24356    200m (3%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-proxy-tgvc5                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-scheduler-newest-cni-20220629121133-24356             100m (1%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 metrics-server-5c6f97fb75-44k7n                            100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         78s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s

	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 48s                kube-proxy       
	  Normal  Starting                 78s                kube-proxy       
	  Normal  NodeHasSufficientPID     93s                kubelet          Node newest-cni-20220629121133-24356 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  93s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  93s                kubelet          Node newest-cni-20220629121133-24356 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    93s                kubelet          Node newest-cni-20220629121133-24356 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                93s                kubelet          Node newest-cni-20220629121133-24356 status is now: NodeReady
	  Normal  Starting                 93s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           81s                node-controller  Node newest-cni-20220629121133-24356 event: Registered Node newest-cni-20220629121133-24356 in Controller
	  Normal  NodeAllocatableEnforced  55s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x4 over 55s)  kubelet          Node newest-cni-20220629121133-24356 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     54s (x3 over 55s)  kubelet          Node newest-cni-20220629121133-24356 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    54s (x4 over 55s)  kubelet          Node newest-cni-20220629121133-24356 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           11s                node-controller  Node newest-cni-20220629121133-24356 event: Registered Node newest-cni-20220629121133-24356 in Controller
	  Normal  Starting                 11s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11s                kubelet          Node newest-cni-20220629121133-24356 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s                kubelet          Node newest-cni-20220629121133-24356 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s                kubelet          Node newest-cni-20220629121133-24356 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             11s                kubelet          Node newest-cni-20220629121133-24356 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  11s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                0s                 kubelet          Node newest-cni-20220629121133-24356 status is now: NodeReady
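The event stream above records three `Starting kubelet.` entries (at 93s, 55s, and 11s), matching the initial boot plus two restarts of the node. A minimal sketch of counting kubelet restarts from an events table in this format, using abbreviated sample rows:

```python
# Abbreviated event rows in the layout above: Type  Reason  Age  From  Message.
events = '''\
Normal  Starting                 93s                kubelet          Starting kubelet.
Normal  RegisteredNode           81s                node-controller  Node ... event: Registered Node ... in Controller
Normal  Starting                 55s                kubelet          Starting kubelet.
Normal  Starting                 11s                kubelet          Starting kubelet.
Normal  NodeReady                0s                 kubelet          Node ... status is now: NodeReady
'''

# Each "Starting kubelet." message marks one kubelet (re)start.
starts = sum("Starting kubelet." in line for line in events.splitlines())
print(starts)
```

Note the kube-proxy `Starting` events use the same table but carry an empty Message column.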
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [67eaf5abb356] <==
	* {"level":"info","ts":"2022-06-29T19:11:56.459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-29T19:11:56.459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-06-29T19:11:56.459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-06-29T19:11:56.459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-06-29T19:11:56.459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-06-29T19:11:56.459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-06-29T19:11:56.459Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:11:56.460Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:11:56.460Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:11:56.460Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:11:56.460Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:newest-cni-20220629121133-24356 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-29T19:11:56.460Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T19:11:56.460Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T19:11:56.461Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-29T19:11:56.461Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-29T19:11:56.461Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-29T19:11:56.461Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-06-29T19:12:17.155Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-06-29T19:12:17.155Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"newest-cni-20220629121133-24356","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	WARNING: 2022/06/29 19:12:17 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/29 19:12:17 [core] grpc: addrConn.createTransport failed to connect to {192.168.67.2:2379 192.168.67.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.67.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-06-29T19:12:17.166Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2022-06-29T19:12:17.168Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-06-29T19:12:17.170Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-06-29T19:12:17.170Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"newest-cni-20220629121133-24356","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> etcd [cae530751925] <==
	* {"level":"info","ts":"2022-06-29T19:12:40.957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-06-29T19:12:40.958Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-06-29T19:12:40.958Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:12:40.961Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-29T19:12:40.961Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-29T19:12:40.961Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-29T19:12:40.961Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-29T19:12:40.962Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-06-29T19:12:40.962Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-06-29T19:12:42.852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2022-06-29T19:12:42.852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-06-29T19:12:42.852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-06-29T19:12:42.852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2022-06-29T19:12:42.852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-06-29T19:12:42.852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2022-06-29T19:12:42.852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-06-29T19:12:42.853Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:newest-cni-20220629121133-24356 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-29T19:12:42.853Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T19:12:42.853Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-29T19:12:42.853Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-29T19:12:42.853Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-29T19:12:42.854Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-29T19:12:42.854Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"warn","ts":"2022-06-29T19:13:27.608Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"148.80459ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-tgvc5\" ","response":"range_response_count:1 size:4561"}
	{"level":"info","ts":"2022-06-29T19:13:27.608Z","caller":"traceutil/trace.go:171","msg":"trace[210316011] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-tgvc5; range_end:; response_count:1; response_revision:510; }","duration":"148.897501ms","start":"2022-06-29T19:13:27.459Z","end":"2022-06-29T19:13:27.608Z","steps":["trace[210316011] 'agreement among raft nodes before linearized reading'  (duration: 73.544644ms)","trace[210316011] 'range keys from in-memory index tree'  (duration: 75.224468ms)"],"step_count":2}
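The `apply request took too long` warning just above is a structured (JSON) etcd log entry whose `took` field exceeds `expected-duration`. A minimal sketch of checking that condition on such a line, using an abbreviated copy of the warning as sample input (fields trimmed for brevity):

```python
import json

# Abbreviated copy of the etcd warning line above.
line = ('{"level":"warn","ts":"2022-06-29T19:13:27.608Z",'
        '"caller":"etcdserver/util.go:166",'
        '"msg":"apply request took too long",'
        '"took":"148.80459ms","expected-duration":"100ms",'
        '"prefix":"read-only range "}')

rec = json.loads(line)
# Both durations here are plain millisecond strings; strip the unit and compare.
took_ms = float(rec["took"].rstrip("ms"))
limit_ms = float(rec["expected-duration"].rstrip("ms"))
print(took_ms > limit_ms)
```

A single ~149ms read on a freshly restarted single-node cluster is unremarkable; sustained warnings would point at disk or CPU contention.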
	
	* 
	* ==> kernel <==
	*  19:13:34 up  1:21,  0 users,  load average: 1.29, 1.14, 1.21
	Linux newest-cni-20220629121133-24356 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [24248b5ec744] <==
	* W0629 19:12:18.160266       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160028       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160284       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160289       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160289       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.159820       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160311       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160318       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160317       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160333       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160367       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160346       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160348       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160382       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160365       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160394       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160408       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160412       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160429       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160395       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160471       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160492       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160493       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160508       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0629 19:12:18.160549       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-apiserver [e6b78ff80d34] <==
	* I0629 19:12:44.664307       1 cache.go:39] Caches are synced for autoregister controller
	I0629 19:12:44.670266       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0629 19:12:44.673976       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0629 19:12:45.328063       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0629 19:12:45.555212       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0629 19:12:45.672553       1 handler_proxy.go:102] no RequestInfo found in the context
	E0629 19:12:45.672591       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0629 19:12:45.672631       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0629 19:12:45.672564       1 handler_proxy.go:102] no RequestInfo found in the context
	E0629 19:12:45.672667       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0629 19:12:45.673726       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0629 19:12:45.837810       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0629 19:12:46.156993       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0629 19:12:46.173631       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0629 19:12:46.253297       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0629 19:12:46.266053       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0629 19:12:46.272892       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0629 19:12:48.048827       1 controller.go:611] quota admission added evaluator for: namespaces
	I0629 19:12:48.186656       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.96.123.255]
	I0629 19:12:48.234612       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.97.35.178]
	I0629 19:13:23.418539       1 controller.go:611] quota admission added evaluator for: endpoints
	I0629 19:13:24.163933       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0629 19:13:24.163934       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0629 19:13:24.216693       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [c6b7f1c8b2e0] <==
	* I0629 19:12:13.183226       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0629 19:12:13.185685       1 shared_informer.go:262] Caches are synced for taint
	I0629 19:12:13.185757       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0629 19:12:13.185795       1 node_lifecycle_controller.go:1014] Missing timestamp for Node newest-cni-20220629121133-24356. Assuming now as a timestamp.
	I0629 19:12:13.185829       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0629 19:12:13.185885       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0629 19:12:13.185965       1 event.go:294] "Event occurred" object="newest-cni-20220629121133-24356" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220629121133-24356 event: Registered Node newest-cni-20220629121133-24356 in Controller"
	I0629 19:12:13.187019       1 range_allocator.go:374] Set node newest-cni-20220629121133-24356 PodCIDR to [192.168.0.0/24]
	I0629 19:12:13.198358       1 shared_informer.go:262] Caches are synced for attach detach
	I0629 19:12:13.204745       1 shared_informer.go:262] Caches are synced for HPA
	I0629 19:12:13.329884       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0629 19:12:13.332668       1 shared_informer.go:262] Caches are synced for cronjob
	I0629 19:12:13.382573       1 shared_informer.go:262] Caches are synced for resource quota
	I0629 19:12:13.386095       1 shared_informer.go:262] Caches are synced for resource quota
	I0629 19:12:13.680656       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0629 19:12:13.690185       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0629 19:12:13.795328       1 shared_informer.go:262] Caches are synced for garbage collector
	I0629 19:12:13.878417       1 shared_informer.go:262] Caches are synced for garbage collector
	I0629 19:12:13.878436       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0629 19:12:13.883509       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tgvc5"
	I0629 19:12:14.181498       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-9wn52"
	I0629 19:12:14.264961       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-2gsk5"
	I0629 19:12:14.286781       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-9wn52"
	I0629 19:12:16.407221       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0629 19:12:16.411414       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-44k7n"
	
	* 
	* ==> kube-controller-manager [cf60daa2910e] <==
	* I0629 19:13:23.945427       1 shared_informer.go:262] Caches are synced for GC
	I0629 19:13:23.945427       1 shared_informer.go:262] Caches are synced for daemon sets
	I0629 19:13:23.947762       1 shared_informer.go:262] Caches are synced for taint
	I0629 19:13:23.947814       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0629 19:13:23.947924       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0629 19:13:23.947985       1 node_lifecycle_controller.go:1014] Missing timestamp for Node newest-cni-20220629121133-24356. Assuming now as a timestamp.
	I0629 19:13:23.948013       1 node_lifecycle_controller.go:1165] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0629 19:13:23.948041       1 event.go:294] "Event occurred" object="newest-cni-20220629121133-24356" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220629121133-24356 event: Registered Node newest-cni-20220629121133-24356 in Controller"
	I0629 19:13:23.957735       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0629 19:13:23.960044       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0629 19:13:23.962424       1 shared_informer.go:262] Caches are synced for stateful set
	I0629 19:13:24.005429       1 shared_informer.go:262] Caches are synced for resource quota
	I0629 19:13:24.005659       1 shared_informer.go:262] Caches are synced for cronjob
	I0629 19:13:24.011444       1 shared_informer.go:262] Caches are synced for disruption
	I0629 19:13:24.011458       1 disruption.go:371] Sending events to api server.
	I0629 19:13:24.011912       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0629 19:13:24.012766       1 shared_informer.go:262] Caches are synced for resource quota
	I0629 19:13:24.013739       1 shared_informer.go:262] Caches are synced for ephemeral
	I0629 19:13:24.167272       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0629 19:13:24.168548       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	I0629 19:13:24.316838       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-vd4rr"
	I0629 19:13:24.319553       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-2jh4t"
	I0629 19:13:24.440079       1 shared_informer.go:262] Caches are synced for garbage collector
	I0629 19:13:24.510086       1 shared_informer.go:262] Caches are synced for garbage collector
	I0629 19:13:24.510175       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [5ceda341afbb] <==
	* I0629 19:12:45.815065       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0629 19:12:45.815130       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0629 19:12:45.815151       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0629 19:12:45.833861       1 server_others.go:206] "Using iptables Proxier"
	I0629 19:12:45.834551       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0629 19:12:45.834623       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0629 19:12:45.834702       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0629 19:12:45.834795       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 19:12:45.835098       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 19:12:45.835671       1 server.go:661] "Version info" version="v1.24.2"
	I0629 19:12:45.835770       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 19:12:45.836545       1 config.go:444] "Starting node config controller"
	I0629 19:12:45.836584       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0629 19:12:45.836802       1 config.go:317] "Starting service config controller"
	I0629 19:12:45.836857       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0629 19:12:45.840488       1 config.go:226] "Starting endpoint slice config controller"
	I0629 19:12:45.840516       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0629 19:12:45.840527       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0629 19:12:45.937671       1 shared_informer.go:262] Caches are synced for service config
	I0629 19:12:45.937817       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-proxy [995d90c1cfbe] <==
	* I0629 19:12:15.074508       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0629 19:12:15.074581       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0629 19:12:15.074601       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0629 19:12:15.157156       1 server_others.go:206] "Using iptables Proxier"
	I0629 19:12:15.157228       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0629 19:12:15.157237       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0629 19:12:15.157248       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0629 19:12:15.157276       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 19:12:15.157374       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0629 19:12:15.157506       1 server.go:661] "Version info" version="v1.24.2"
	I0629 19:12:15.157512       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 19:12:15.158049       1 config.go:317] "Starting service config controller"
	I0629 19:12:15.158103       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0629 19:12:15.158110       1 config.go:226] "Starting endpoint slice config controller"
	I0629 19:12:15.158120       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0629 19:12:15.158589       1 config.go:444] "Starting node config controller"
	I0629 19:12:15.158613       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0629 19:12:15.258230       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0629 19:12:15.258258       1 shared_informer.go:262] Caches are synced for service config
	I0629 19:12:15.258717       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [3ee0db0d474b] <==
	* E0629 19:11:58.258207       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0629 19:11:58.258307       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0629 19:11:58.258339       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0629 19:11:58.258392       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0629 19:11:58.258423       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0629 19:11:58.258813       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0629 19:11:58.258847       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0629 19:11:59.093530       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0629 19:11:59.093568       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0629 19:11:59.126003       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0629 19:11:59.126077       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0629 19:11:59.126003       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0629 19:11:59.126114       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0629 19:11:59.188925       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0629 19:11:59.188964       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0629 19:11:59.249913       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0629 19:11:59.249953       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0629 19:11:59.348382       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0629 19:11:59.348435       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0629 19:11:59.372515       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0629 19:11:59.372553       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0629 19:12:02.353684       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0629 19:12:17.149621       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0629 19:12:17.151223       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0629 19:12:17.151399       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	* 
	* ==> kube-scheduler [4d7ec14e3d56] <==
	* W0629 19:12:40.973007       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0629 19:12:42.010252       1 serving.go:348] Generated self-signed cert in-memory
	W0629 19:12:44.564768       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0629 19:12:44.564804       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0629 19:12:44.564811       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0629 19:12:44.564816       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0629 19:12:44.629627       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.2"
	I0629 19:12:44.629661       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0629 19:12:44.630991       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0629 19:12:44.631065       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0629 19:12:44.631038       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0629 19:12:44.632554       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0629 19:12:44.732160       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-06-29 19:12:30 UTC, end at Wed 2022-06-29 19:13:36 UTC. --
	Jun 29 19:13:35 newest-cni-20220629121133-24356 kubelet[3985]:  > pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-vd4rr"
	Jun 29 19:13:35 newest-cni-20220629121133-24356 kubelet[3985]: E0629 19:13:35.227200    3985 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dashboard-metrics-scraper-dffd48c4c-vd4rr_kubernetes-dashboard(519f3928-a3b3-4601-b3f1-4cc5bf630e30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dashboard-metrics-scraper-dffd48c4c-vd4rr_kubernetes-dashboard(519f3928-a3b3-4601-b3f1-4cc5bf630e30)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"d1f1faf1b389db1afd3524ba7fa1fc2ba5eef22220a0195438c7cae06dd535b9\\\" network for pod \\\"dashboard-metrics-scraper-dffd48c4c-vd4rr\\\": networkPlugin cni failed to set up pod \\\"dashboard-metrics-scraper-dffd48c4c-vd4rr_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"d1f1faf1b389db1afd3524ba7fa1fc2ba5eef22220a0195438c7cae06dd535b9\\\" network for pod \\\"dashboard-metrics-scraper-dffd48c4c-vd4rr\\\": networkPlugin cni failed to teardown pod \\\"dashboard-metrics-scraper-dffd48c4c-vd4rr_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.26 -j CNI-12585b1658216c8d6f413247 -m comment --comment name: \\\"crio\\\" id: \\\"d1f1faf1b389db1afd3524ba7fa1fc2ba5eef22220a0195438c7cae06dd535b9\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-12585b1658216c8d6f413247':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-vd4rr" podUID=519f3928-a3b3-4601-b3f1-4cc5bf630e30
	Jun 29 19:13:35 newest-cni-20220629121133-24356 kubelet[3985]: I0629 19:13:35.230439    3985 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="d1f1faf1b389db1afd3524ba7fa1fc2ba5eef22220a0195438c7cae06dd535b9"
	Jun 29 19:13:35 newest-cni-20220629121133-24356 kubelet[3985]: I0629 19:13:35.239765    3985 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="f437ba325a70c1abf2d2d7af329c2ad09111e77944591cdfd135fdc495f039b4"
	Jun 29 19:13:35 newest-cni-20220629121133-24356 kubelet[3985]: E0629 19:13:35.862335    3985 remote_runtime.go:212] "RunPodSandbox from runtime service failed" err=<
	Jun 29 19:13:35 newest-cni-20220629121133-24356 kubelet[3985]:         rpc error: code = Unknown desc = [failed to set up sandbox container "c23a564d0853b8525a7173e25a9c843104da26133d176eea040b63faf02040fc" network for pod "coredns-6d4b75cb6d-2gsk5": networkPlugin cni failed to set up pod "coredns-6d4b75cb6d-2gsk5_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "c23a564d0853b8525a7173e25a9c843104da26133d176eea040b63faf02040fc" network for pod "coredns-6d4b75cb6d-2gsk5": networkPlugin cni failed to teardown pod "coredns-6d4b75cb6d-2gsk5_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.27 -j CNI-98e1965eb390c52cc53cc1cc -m comment --comment name: "crio" id: "c23a564d0853b8525a7173e25a9c843104da26133d176eea040b63faf02040fc" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-98e1965eb390c52cc53cc1cc':No such file or directory
	Jun 29 19:13:35 newest-cni-20220629121133-24356 kubelet[3985]:         
	Jun 29 19:13:35 newest-cni-20220629121133-24356 kubelet[3985]:         Try `iptables -h' or 'iptables --help' for more information.
	Jun 29 19:13:35 newest-cni-20220629121133-24356 kubelet[3985]:         ]
	Jun 29 19:13:35 newest-cni-20220629121133-24356 kubelet[3985]:  >
	Jun 29 19:13:35 newest-cni-20220629121133-24356 kubelet[3985]: E0629 19:13:35.862458    3985 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=<
	Jun 29 19:13:35 newest-cni-20220629121133-24356 kubelet[3985]:         rpc error: code = Unknown desc = [failed to set up sandbox container "c23a564d0853b8525a7173e25a9c843104da26133d176eea040b63faf02040fc" network for pod "coredns-6d4b75cb6d-2gsk5": networkPlugin cni failed to set up pod "coredns-6d4b75cb6d-2gsk5_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "c23a564d0853b8525a7173e25a9c843104da26133d176eea040b63faf02040fc" network for pod "coredns-6d4b75cb6d-2gsk5": networkPlugin cni failed to teardown pod "coredns-6d4b75cb6d-2gsk5_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.27 -j CNI-98e1965eb390c52cc53cc1cc -m comment --comment name: "crio" id: "c23a564d0853b8525a7173e25a9c843104da26133d176eea040b63faf02040fc" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-98e1965eb390c52cc53cc1cc':No such file or directory
	Jun 29 19:13:35 newest-cni-20220629121133-24356 kubelet[3985]:         
	Jun 29 19:13:35 newest-cni-20220629121133-24356 kubelet[3985]:         Try `iptables -h' or 'iptables --help' for more information.
	Jun 29 19:13:35 newest-cni-20220629121133-24356 kubelet[3985]:         ]
	Jun 29 19:13:35 newest-cni-20220629121133-24356 kubelet[3985]:  > pod="kube-system/coredns-6d4b75cb6d-2gsk5"
	Jun 29 19:13:35 newest-cni-20220629121133-24356 kubelet[3985]: E0629 19:13:35.862481    3985 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err=<
	Jun 29 19:13:35 newest-cni-20220629121133-24356 kubelet[3985]:         rpc error: code = Unknown desc = [failed to set up sandbox container "c23a564d0853b8525a7173e25a9c843104da26133d176eea040b63faf02040fc" network for pod "coredns-6d4b75cb6d-2gsk5": networkPlugin cni failed to set up pod "coredns-6d4b75cb6d-2gsk5_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "c23a564d0853b8525a7173e25a9c843104da26133d176eea040b63faf02040fc" network for pod "coredns-6d4b75cb6d-2gsk5": networkPlugin cni failed to teardown pod "coredns-6d4b75cb6d-2gsk5_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.27 -j CNI-98e1965eb390c52cc53cc1cc -m comment --comment name: "crio" id: "c23a564d0853b8525a7173e25a9c843104da26133d176eea040b63faf02040fc" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-98e1965eb390c52cc53cc1cc':No such file or directory
	Jun 29 19:13:35 newest-cni-20220629121133-24356 kubelet[3985]:         
	Jun 29 19:13:35 newest-cni-20220629121133-24356 kubelet[3985]:         Try `iptables -h' or 'iptables --help' for more information.
	Jun 29 19:13:35 newest-cni-20220629121133-24356 kubelet[3985]:         ]
	Jun 29 19:13:35 newest-cni-20220629121133-24356 kubelet[3985]:  > pod="kube-system/coredns-6d4b75cb6d-2gsk5"
	Jun 29 19:13:35 newest-cni-20220629121133-24356 kubelet[3985]: E0629 19:13:35.862554    3985 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6d4b75cb6d-2gsk5_kube-system(c9d7132e-f877-48c6-9493-810c7fdcff0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6d4b75cb6d-2gsk5_kube-system(c9d7132e-f877-48c6-9493-810c7fdcff0c)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"c23a564d0853b8525a7173e25a9c843104da26133d176eea040b63faf02040fc\\\" network for pod \\\"coredns-6d4b75cb6d-2gsk5\\\": networkPlugin cni failed to set up pod \\\"coredns-6d4b75cb6d-2gsk5_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"c23a564d0853b8525a7173e25a9c843104da26133d176eea040b63faf02040fc\\\" network for pod \\\"coredns-6d4b75cb6d-2gsk5\\\": networkPlugin cni failed to teardown pod \\\"coredns-6d4b75cb6d-2gsk5_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.27 -j CNI-98e1965eb390c52cc53cc1cc -m comment --comment name: \\\"crio\\\" id: \\\"c23a564d0853b8525a7173e25a9c843104da26133d176eea040b63faf02040fc\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-98e1965eb390c52cc53cc1cc':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-6d4b75cb6d-2gsk5" podUID=c9d7132e-f877-48c6-9493-810c7fdcff0c
	Jun 29 19:13:36 newest-cni-20220629121133-24356 kubelet[3985]: I0629 19:13:36.255743    3985 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="e7d4ce0a9f39ac4eabc40df88a9ccb935478db0f6f42542cf5c46e8a75c5cf1d"
	Jun 29 19:13:36 newest-cni-20220629121133-24356 kubelet[3985]: I0629 19:13:36.274975    3985 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="c23a564d0853b8525a7173e25a9c843104da26133d176eea040b63faf02040fc"
	
	* 
	* ==> storage-provisioner [832875ac5455] <==
	* I0629 19:12:46.262104       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0629 19:12:46.273799       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0629 19:12:46.273852       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0629 19:13:23.422546       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0629 19:13:23.422699       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5b9cc85e-b026-47a8-8664-6ebffd6b3f3b", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220629121133-24356_bed6bbe4-d19c-4bda-b40c-ba33e906122d became leader
	I0629 19:13:23.422940       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220629121133-24356_bed6bbe4-d19c-4bda-b40c-ba33e906122d!
	I0629 19:13:23.525158       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220629121133-24356_bed6bbe4-d19c-4bda-b40c-ba33e906122d!
	
	* 
	* ==> storage-provisioner [b9102467e462] <==
	* I0629 19:12:16.998671       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0629 19:12:17.007477       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0629 19:12:17.007541       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0629 19:12:17.016576       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0629 19:12:17.016763       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220629121133-24356_d24149b3-3086-417b-9be2-a4f9c0c96904!
	I0629 19:12:17.017043       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5b9cc85e-b026-47a8-8664-6ebffd6b3f3b", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220629121133-24356_d24149b3-3086-417b-9be2-a4f9c0c96904 became leader
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220629121133-24356 -n newest-cni-20220629121133-24356
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-20220629121133-24356 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-6d4b75cb6d-2gsk5 metrics-server-5c6f97fb75-44k7n dashboard-metrics-scraper-dffd48c4c-vd4rr kubernetes-dashboard-5fd5574d9f-2jh4t
helpers_test.go:272: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context newest-cni-20220629121133-24356 describe pod coredns-6d4b75cb6d-2gsk5 metrics-server-5c6f97fb75-44k7n dashboard-metrics-scraper-dffd48c4c-vd4rr kubernetes-dashboard-5fd5574d9f-2jh4t
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context newest-cni-20220629121133-24356 describe pod coredns-6d4b75cb6d-2gsk5 metrics-server-5c6f97fb75-44k7n dashboard-metrics-scraper-dffd48c4c-vd4rr kubernetes-dashboard-5fd5574d9f-2jh4t: exit status 1 (276.146955ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-6d4b75cb6d-2gsk5" not found
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-44k7n" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-dffd48c4c-vd4rr" not found
	Error from server (NotFound): pods "kubernetes-dashboard-5fd5574d9f-2jh4t" not found

** /stderr **
helpers_test.go:277: kubectl --context newest-cni-20220629121133-24356 describe pod coredns-6d4b75cb6d-2gsk5 metrics-server-5c6f97fb75-44k7n dashboard-metrics-scraper-dffd48c4c-vd4rr kubernetes-dashboard-5fd5574d9f-2jh4t: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (48.76s)
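For manual triage, the post-mortem sequence the harness runs above (API-server status, a field-selector query for non-Running pods, then `describe` on each) can be sketched as a small script. This is a sketch, not part of the harness: the profile name is taken from this run, and the minikube/kubectl invocations are commented out because they need the live test cluster:

```shell
#!/bin/sh
# Sketch of the post-mortem triage that helpers_test.go performs after a failure.
# PROFILE is the profile from this run; substitute your own failing profile.
PROFILE="newest-cni-20220629121133-24356"

# 1) Check the API server (needs the minikube binary and a live cluster):
# out/minikube-darwin-amd64 status --format='{{.APIServer}}' -p "$PROFILE" -n "$PROFILE"

# 2) List pods that are not Running; an empty list means the cluster is healthy:
QUERY="kubectl --context $PROFILE get po -A -o=jsonpath={.items[*].metadata.name} --field-selector=status.phase!=Running"
echo "$QUERY"

# 3) Describe each pod from step 2 to see its events. "NotFound" here, as in
#    the log above, usually means the pod was replaced or garbage-collected
#    between steps 2 and 3.
# kubectl --context "$PROFILE" describe pod <pod-names-from-step-2>
```

Note the race between steps 2 and 3: the pod list is a snapshot, so a `describe` that returns NotFound (exit status 1) does not by itself indicate a cluster problem.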


Test pass (249/289)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 35.16
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.29
10 TestDownloadOnly/v1.24.2/json-events 6.92
11 TestDownloadOnly/v1.24.2/preload-exists 0
14 TestDownloadOnly/v1.24.2/kubectl 0
15 TestDownloadOnly/v1.24.2/LogsDuration 0.31
16 TestDownloadOnly/DeleteAll 2.09
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.43
18 TestDownloadOnlyKic 7.59
19 TestBinaryMirror 1.69
20 TestOffline 50.65
22 TestAddons/Setup 169.74
26 TestAddons/parallel/MetricsServer 5.57
27 TestAddons/parallel/HelmTiller 11.24
29 TestAddons/parallel/CSI 40.68
30 TestAddons/parallel/Headlamp 10.25
32 TestAddons/serial/GCPAuth 19.63
33 TestAddons/StoppedEnableDisable 12.92
34 TestCertOptions 37.03
35 TestCertExpiration 240.41
36 TestDockerFlags 34.24
37 TestForceSystemdFlag 37.37
38 TestForceSystemdEnv 35.63
40 TestHyperKitDriverInstallOrUpdate 7.56
43 TestErrorSpam/setup 29.26
44 TestErrorSpam/start 2.11
45 TestErrorSpam/status 1.33
46 TestErrorSpam/pause 1.9
47 TestErrorSpam/unpause 1.91
48 TestErrorSpam/stop 13.09
51 TestFunctional/serial/CopySyncFile 0
52 TestFunctional/serial/StartWithProxy 49.99
53 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/SoftStart 40.04
55 TestFunctional/serial/KubeContext 0.03
56 TestFunctional/serial/KubectlGetPods 1.64
59 TestFunctional/serial/CacheCmd/cache/add_remote 9.34
60 TestFunctional/serial/CacheCmd/cache/add_local 1.92
61 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
62 TestFunctional/serial/CacheCmd/cache/list 0.07
63 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.46
64 TestFunctional/serial/CacheCmd/cache/cache_reload 3.57
65 TestFunctional/serial/CacheCmd/cache/delete 0.15
66 TestFunctional/serial/MinikubeKubectlCmd 0.49
67 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.63
68 TestFunctional/serial/ExtraConfig 55.28
69 TestFunctional/serial/ComponentHealth 0.05
70 TestFunctional/serial/LogsCmd 3.37
71 TestFunctional/serial/LogsFileCmd 3.17
73 TestFunctional/parallel/ConfigCmd 0.51
74 TestFunctional/parallel/DashboardCmd 13.7
75 TestFunctional/parallel/DryRun 1.64
76 TestFunctional/parallel/InternationalLanguage 0.73
77 TestFunctional/parallel/StatusCmd 1.5
80 TestFunctional/parallel/ServiceCmd 18.88
82 TestFunctional/parallel/AddonsCmd 0.29
83 TestFunctional/parallel/PersistentVolumeClaim 27.53
85 TestFunctional/parallel/SSHCmd 0.91
86 TestFunctional/parallel/CpCmd 1.75
87 TestFunctional/parallel/MySQL 20.22
88 TestFunctional/parallel/FileSync 0.49
89 TestFunctional/parallel/CertSync 3.1
93 TestFunctional/parallel/NodeLabels 0.05
95 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
97 TestFunctional/parallel/Version/short 0.12
98 TestFunctional/parallel/Version/components 0.76
99 TestFunctional/parallel/ImageCommands/ImageListShort 0.34
100 TestFunctional/parallel/ImageCommands/ImageListTable 0.35
101 TestFunctional/parallel/ImageCommands/ImageListJson 0.34
102 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
103 TestFunctional/parallel/ImageCommands/ImageBuild 5.78
104 TestFunctional/parallel/ImageCommands/Setup 4.19
105 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.52
106 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.89
107 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.25
108 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.34
109 TestFunctional/parallel/DockerEnv/bash 1.98
110 TestFunctional/parallel/ImageCommands/ImageRemove 0.78
111 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.92
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.39
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.45
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.37
115 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.82
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.17
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.65
127 TestFunctional/parallel/ProfileCmd/profile_list 0.53
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.64
129 TestFunctional/parallel/MountCmd/any-port 11.94
130 TestFunctional/parallel/MountCmd/specific-port 2.77
131 TestFunctional/delete_addon-resizer_images 0.19
132 TestFunctional/delete_my-image_image 0.07
133 TestFunctional/delete_minikube_cached_images 0.07
143 TestJSONOutput/start/Command 44.78
144 TestJSONOutput/start/Audit 0
146 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
147 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
149 TestJSONOutput/pause/Command 0.67
150 TestJSONOutput/pause/Audit 0
152 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/unpause/Command 0.66
156 TestJSONOutput/unpause/Audit 0
158 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/stop/Command 12.33
162 TestJSONOutput/stop/Audit 0
164 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
166 TestErrorJSONOutput 0.78
168 TestKicCustomNetwork/create_custom_network 32.2
169 TestKicCustomNetwork/use_default_bridge_network 32.65
170 TestKicExistingNetwork 32.93
171 TestKicCustomSubnet 34.51
172 TestMainNoArgs 0.07
173 TestMinikubeProfile 68.85
176 TestMountStart/serial/StartWithMountFirst 7.74
177 TestMountStart/serial/VerifyMountFirst 0.43
178 TestMountStart/serial/StartWithMountSecond 7.71
179 TestMountStart/serial/VerifyMountSecond 0.44
180 TestMountStart/serial/DeleteFirst 2.28
181 TestMountStart/serial/VerifyMountPostDelete 0.43
182 TestMountStart/serial/Stop 1.64
183 TestMountStart/serial/RestartStopped 5.6
184 TestMountStart/serial/VerifyMountPostStop 0.43
187 TestMultiNode/serial/FreshStart2Nodes 96.06
188 TestMultiNode/serial/DeployApp2Nodes 9.24
189 TestMultiNode/serial/PingHostFrom2Pods 0.85
190 TestMultiNode/serial/AddNode 37.11
191 TestMultiNode/serial/ProfileList 0.53
192 TestMultiNode/serial/CopyFile 16.91
193 TestMultiNode/serial/StopNode 14.22
194 TestMultiNode/serial/StartAfterStop 19.93
195 TestMultiNode/serial/RestartKeepsNodes 110.04
196 TestMultiNode/serial/DeleteNode 18.71
197 TestMultiNode/serial/StopMultiNode 25.13
198 TestMultiNode/serial/RestartMultiNode 57.59
199 TestMultiNode/serial/ValidateNameConflict 34.06
205 TestScheduledStopUnix 104.39
206 TestSkaffold 70
208 TestInsufficientStorage 13.01
224 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 8.46
225 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 10.74
226 TestStoppedBinaryUpgrade/Setup 0.75
228 TestStoppedBinaryUpgrade/MinikubeLogs 3.59
230 TestPause/serial/Start 44.63
231 TestPause/serial/SecondStartNoReconfiguration 42.07
232 TestPause/serial/Pause 0.73
242 TestNoKubernetes/serial/StartNoK8sWithVersion 0.4
243 TestNoKubernetes/serial/StartWithK8s 30.49
244 TestNoKubernetes/serial/StartWithStopK8s 17.45
245 TestNoKubernetes/serial/Start 6.64
246 TestNoKubernetes/serial/VerifyK8sNotRunning 0.42
247 TestNoKubernetes/serial/ProfileList 1.56
248 TestNoKubernetes/serial/Stop 1.68
249 TestNoKubernetes/serial/StartNoArgs 4.44
250 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.46
251 TestNetworkPlugins/group/auto/Start 53.75
252 TestNetworkPlugins/group/auto/KubeletFlags 0.46
253 TestNetworkPlugins/group/auto/NetCatPod 15.6
254 TestNetworkPlugins/group/auto/DNS 0.11
255 TestNetworkPlugins/group/auto/Localhost 0.11
256 TestNetworkPlugins/group/auto/HairPin 5.1
257 TestNetworkPlugins/group/kindnet/Start 51.39
258 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
259 TestNetworkPlugins/group/kindnet/KubeletFlags 0.46
260 TestNetworkPlugins/group/kindnet/NetCatPod 15.6
261 TestNetworkPlugins/group/kindnet/DNS 0.12
262 TestNetworkPlugins/group/kindnet/Localhost 0.1
263 TestNetworkPlugins/group/kindnet/HairPin 0.11
264 TestNetworkPlugins/group/cilium/Start 81.66
265 TestNetworkPlugins/group/cilium/ControllerPod 5.02
266 TestNetworkPlugins/group/calico/Start 77.64
267 TestNetworkPlugins/group/cilium/KubeletFlags 0.62
268 TestNetworkPlugins/group/cilium/NetCatPod 15.56
269 TestNetworkPlugins/group/cilium/DNS 0.13
270 TestNetworkPlugins/group/cilium/Localhost 0.11
271 TestNetworkPlugins/group/cilium/HairPin 0.11
272 TestNetworkPlugins/group/false/Start 45.87
273 TestNetworkPlugins/group/false/KubeletFlags 0.47
274 TestNetworkPlugins/group/false/NetCatPod 15
275 TestNetworkPlugins/group/calico/ControllerPod 5.02
276 TestNetworkPlugins/group/false/DNS 0.13
277 TestNetworkPlugins/group/false/Localhost 0.1
278 TestNetworkPlugins/group/false/HairPin 5.12
279 TestNetworkPlugins/group/calico/KubeletFlags 0.51
280 TestNetworkPlugins/group/calico/NetCatPod 15.72
281 TestNetworkPlugins/group/bridge/Start 51.25
282 TestNetworkPlugins/group/calico/DNS 0.12
283 TestNetworkPlugins/group/calico/Localhost 0.12
284 TestNetworkPlugins/group/calico/HairPin 0.12
285 TestNetworkPlugins/group/enable-default-cni/Start 83.22
286 TestNetworkPlugins/group/bridge/KubeletFlags 0.46
287 TestNetworkPlugins/group/bridge/NetCatPod 14.88
288 TestNetworkPlugins/group/bridge/DNS 0.13
289 TestNetworkPlugins/group/bridge/Localhost 0.11
290 TestNetworkPlugins/group/bridge/HairPin 0.1
291 TestNetworkPlugins/group/kubenet/Start 45.18
292 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.47
293 TestNetworkPlugins/group/enable-default-cni/NetCatPod 16.39
294 TestNetworkPlugins/group/enable-default-cni/DNS 0.11
295 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
296 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
297 TestNetworkPlugins/group/kubenet/KubeletFlags 0.47
298 TestNetworkPlugins/group/kubenet/NetCatPod 16.13
301 TestNetworkPlugins/group/kubenet/DNS 0.14
302 TestNetworkPlugins/group/kubenet/Localhost 0.12
305 TestStartStop/group/no-preload/serial/FirstStart 58.51
306 TestStartStop/group/no-preload/serial/DeployApp 12.75
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.77
308 TestStartStop/group/no-preload/serial/Stop 12.54
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.33
310 TestStartStop/group/no-preload/serial/SecondStart 300.59
313 TestStartStop/group/old-k8s-version/serial/Stop 1.64
314 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.33
316 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 19.01
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.56
318 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.53
321 TestStartStop/group/embed-certs/serial/FirstStart 47.61
322 TestStartStop/group/embed-certs/serial/DeployApp 12.72
323 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.74
324 TestStartStop/group/embed-certs/serial/Stop 12.58
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.34
326 TestStartStop/group/embed-certs/serial/SecondStart 298.99
328 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 16.02
329 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.82
330 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.48
333 TestStartStop/group/default-k8s-different-port/serial/FirstStart 83.39
334 TestStartStop/group/default-k8s-different-port/serial/DeployApp 11.7
335 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.75
336 TestStartStop/group/default-k8s-different-port/serial/Stop 12.59
337 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.34
338 TestStartStop/group/default-k8s-different-port/serial/SecondStart 300.25
339 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 14.02
340 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 6.88
341 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.5
345 TestStartStop/group/newest-cni/serial/FirstStart 42.19
346 TestStartStop/group/newest-cni/serial/DeployApp 0
347 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.64
348 TestStartStop/group/newest-cni/serial/Stop 12.66
349 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.34
350 TestStartStop/group/newest-cni/serial/SecondStart 19.48
351 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.5
TestDownloadOnly/v1.16.0/json-events (35.16s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220629105213-24356 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220629105213-24356 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (35.15999061s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (35.16s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220629105213-24356
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220629105213-24356: exit status 85 (291.737714ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------|----------|---------|---------|---------------------|----------|
	| Command |                Args                | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|------------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only -p         | minikube | jenkins | v1.26.0 | 29 Jun 22 10:52 PDT |          |
	|         | download-only-20220629105213-24356 |          |         |         |                     |          |
	|         | --force --alsologtostderr          |          |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0       |          |         |         |                     |          |
	|         | --container-runtime=docker         |          |         |         |                     |          |
	|         | --driver=docker                    |          |         |         |                     |          |
	|---------|------------------------------------|----------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/29 10:52:13
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 10:52:13.609036   24368 out.go:296] Setting OutFile to fd 1 ...
	I0629 10:52:13.609274   24368 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 10:52:13.609279   24368 out.go:309] Setting ErrFile to fd 2...
	I0629 10:52:13.609283   24368 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 10:52:13.609628   24368 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	W0629 10:52:13.609724   24368 root.go:307] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/config/config.json: no such file or directory
	I0629 10:52:13.610158   24368 out.go:303] Setting JSON to true
	I0629 10:52:13.626730   24368 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6701,"bootTime":1656518432,"procs":351,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0629 10:52:13.626809   24368 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 10:52:13.652648   24368 out.go:97] [download-only-20220629105213-24356] minikube v1.26.0 on Darwin 12.4
	I0629 10:52:13.652754   24368 notify.go:193] Checking for updates...
	I0629 10:52:13.672519   24368 out.go:169] MINIKUBE_LOCATION=14420
	W0629 10:52:13.652792   24368 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball: no such file or directory
	I0629 10:52:13.717899   24368 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 10:52:13.738756   24368 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0629 10:52:13.759877   24368 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 10:52:13.780954   24368 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	W0629 10:52:13.822771   24368 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0629 10:52:13.822980   24368 driver.go:360] Setting default libvirt URI to qemu:///system
	W0629 10:52:13.883995   24368 docker.go:113] docker version returned error: exit status 1
	I0629 10:52:13.905723   24368 out.go:97] Using the docker driver based on user configuration
	I0629 10:52:13.905780   24368 start.go:284] selected driver: docker
	I0629 10:52:13.905787   24368 start.go:808] validating driver "docker" against <nil>
	I0629 10:52:13.905918   24368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 10:52:14.022488   24368 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 10:52:14.043990   24368 out.go:169] - Ensure your docker daemon has access to enough CPU/memory resources.
	I0629 10:52:14.065099   24368 out.go:169] - Docs https://docs.docker.com/docker-for-mac/#resources
	I0629 10:52:14.106855   24368 out.go:169] 
	W0629 10:52:14.128121   24368 out_reason.go:110] Requested cpu count 2 is greater than the available cpus of 0
	I0629 10:52:14.149032   24368 out.go:169] 
	I0629 10:52:14.191048   24368 out.go:169] 
	W0629 10:52:14.211981   24368 out_reason.go:110] Docker Desktop has less than 2 CPUs configured, but Kubernetes requires at least 2 to be available
	W0629 10:52:14.212096   24368 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "CPUs" slider bar to 2 or higher
	    5. Click "Apply & Restart"
	W0629 10:52:14.212137   24368 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0629 10:52:14.232828   24368 out.go:169] 
	I0629 10:52:14.254103   24368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 10:52:14.368875   24368 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0629 10:52:14.390741   24368 out.go:272] docker is currently using the  storage driver, consider switching to overlay2 for better performance
	I0629 10:52:14.390816   24368 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0629 10:52:14.435417   24368 out.go:169] 
	W0629 10:52:14.456477   24368 out_reason.go:110] Docker Desktop only has 0MiB available, less than the required 1800MiB for Kubernetes
	W0629 10:52:14.456607   24368 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	W0629 10:52:14.456636   24368 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0629 10:52:14.477220   24368 out.go:169] 
	I0629 10:52:14.519485   24368 out.go:169] 
	W0629 10:52:14.540315   24368 out_reason.go:110] docker only has 0MiB available, less than the required 1800MiB for Kubernetes
	I0629 10:52:14.561371   24368 out.go:169] 
	I0629 10:52:14.582378   24368 start_flags.go:377] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0629 10:52:14.582484   24368 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0629 10:52:14.603275   24368 out.go:169] Using Docker Desktop driver with root privileges
	I0629 10:52:14.624384   24368 cni.go:95] Creating CNI manager for ""
	I0629 10:52:14.624418   24368 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 10:52:14.624435   24368 start_flags.go:310] config:
	{Name:download-only-20220629105213-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220629105213-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 10:52:14.645381   24368 out.go:97] Starting control plane node download-only-20220629105213-24356 in cluster download-only-20220629105213-24356
	I0629 10:52:14.645407   24368 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 10:52:14.666156   24368 out.go:97] Pulling base image ...
	I0629 10:52:14.666217   24368 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0629 10:52:14.666252   24368 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 10:52:14.666407   24368 cache.go:107] acquiring lock: {Name:mkc37f8d0e96011347ac9c73f3e44a2eb3154087 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 10:52:14.666412   24368 cache.go:107] acquiring lock: {Name:mkbd8a6fc3e17869a597322ba73356af248916dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 10:52:14.666479   24368 cache.go:107] acquiring lock: {Name:mk00b79ede01814d599dd69909404efd970fa706 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 10:52:14.666959   24368 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/download-only-20220629105213-24356/config.json ...
	I0629 10:52:14.667283   24368 cache.go:107] acquiring lock: {Name:mk3c5e9e281781e3cbb4925b5f02e00feb7150cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 10:52:14.667371   24368 cache.go:107] acquiring lock: {Name:mk8b096d1e1ae8a147b9028a5b933305a61bccb9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 10:52:14.667440   24368 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/download-only-20220629105213-24356/config.json: {Name:mk19a50b86f30226a34456c683434b7d9a29e6a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0629 10:52:14.667573   24368 cache.go:107] acquiring lock: {Name:mk7f4054e23b3b47debb16dd6047a74be2024aae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 10:52:14.667502   24368 cache.go:107] acquiring lock: {Name:mk4eaa98e5da530e0aebdcece010245512199b4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 10:52:14.667662   24368 cache.go:107] acquiring lock: {Name:mk66e98bc8099b05e4110c11eccc18f02a9a5254 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0629 10:52:14.668410   24368 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.16.0
	I0629 10:52:14.668416   24368 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.16.0
	I0629 10:52:14.668414   24368 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0629 10:52:14.668418   24368 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.2
	I0629 10:52:14.668428   24368 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.16.0
	I0629 10:52:14.668425   24368 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.16.0
	I0629 10:52:14.668445   24368 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0629 10:52:14.668478   24368 image.go:134] retrieving image: k8s.gcr.io/etcd:3.3.15-0
	I0629 10:52:14.668626   24368 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0629 10:52:14.669107   24368 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/linux/amd64/v1.16.0/kubelet
	I0629 10:52:14.669109   24368 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/linux/amd64/v1.16.0/kubectl
	I0629 10:52:14.669108   24368 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/linux/amd64/v1.16.0/kubeadm
	I0629 10:52:14.675303   24368 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.16.0: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0629 10:52:14.675624   24368 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.16.0: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0629 10:52:14.676565   24368 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.16.0: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0629 10:52:14.677343   24368 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.2: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0629 10:52:14.677419   24368 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.16.0: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0629 10:52:14.677681   24368 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0629 10:52:14.678042   24368 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.3.15-0: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0629 10:52:14.678174   24368 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: dial unix /Users/jenkins/Library/Containers/com.docker.docker/Data/docker.raw.sock: connect: connection refused
	I0629 10:52:14.729443   24368 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e to local cache
	I0629 10:52:14.729631   24368 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local cache directory
	I0629 10:52:14.729761   24368 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e to local cache
	I0629 10:52:17.280058   24368 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	I0629 10:52:21.393078   24368 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0629 10:52:21.550880   24368 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0629 10:52:21.553618   24368 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0
	I0629 10:52:21.555485   24368 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0
	I0629 10:52:21.556529   24368 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0
	I0629 10:52:21.558329   24368 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0
	I0629 10:52:21.560708   24368 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2
	I0629 10:52:21.597474   24368 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0
	I0629 10:52:21.804787   24368 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0629 10:52:21.804802   24368 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 7.138369084s
	I0629 10:52:21.804816   24368 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0629 10:52:21.948519   24368 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
	I0629 10:52:21.948553   24368 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 7.28204701s
	I0629 10:52:21.948562   24368 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
	I0629 10:52:23.437533   24368 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 exists
	I0629 10:52:23.437548   24368 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.2" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2" took 8.770248881s
	I0629 10:52:23.437558   24368 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 succeeded
	I0629 10:52:24.249865   24368 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 exists
	I0629 10:52:24.249880   24368 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0" took 9.582750849s
	I0629 10:52:24.249891   24368 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 succeeded
	I0629 10:52:24.675379   24368 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 exists
	I0629 10:52:24.675398   24368 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0" took 10.007866732s
	I0629 10:52:24.675407   24368 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 succeeded
	I0629 10:52:24.881177   24368 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 exists
	I0629 10:52:24.881192   24368 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0" took 10.214741265s
	I0629 10:52:24.881201   24368 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 succeeded
	I0629 10:52:24.924356   24368 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 exists
	I0629 10:52:24.924371   24368 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0" took 10.257885102s
	I0629 10:52:24.924379   24368 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 succeeded
	I0629 10:52:25.566187   24368 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 exists
	I0629 10:52:25.566203   24368 cache.go:96] cache image "k8s.gcr.io/etcd:3.3.15-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0" took 10.898877378s
	I0629 10:52:25.566212   24368 cache.go:80] save to tar file k8s.gcr.io/etcd:3.3.15-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 succeeded
	I0629 10:52:25.566226   24368 cache.go:87] Successfully saved all images to host disk.
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220629105213-24356"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.29s)
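The `download.go:101` lines in the log above show how minikube fetches the Kubernetes binaries: each URL carries a `checksum=file:` query pointing at the published `.sha1` sibling file, and the downloaded file is verified against that digest. Below is a minimal standalone sketch of that verification step. It uses a locally created stand-in file rather than a real network download, and `sha1sum` from GNU coreutils (on macOS the equivalent is `shasum -a 1`); the file names are illustrative, not minikube's actual code.

```shell
#!/bin/sh
# Sketch of the checksum=file: verification implied by the download URLs above:
# given <file> and <file>.sha1, recompute the digest and compare before trusting it.
set -e

tmpdir=$(mktemp -d)
trap 'rm -rf "$tmpdir"' EXIT

# Stand-in for a downloaded binary (e.g. .../v1.16.0/bin/linux/amd64/kubectl)
printf 'fake-kubectl-binary' > "$tmpdir/kubectl"

# Stand-in for the published kubectl.sha1 checksum file
sha1sum "$tmpdir/kubectl" | awk '{print $1}' > "$tmpdir/kubectl.sha1"

# Verification: recompute the digest and compare it with the published one
actual=$(sha1sum "$tmpdir/kubectl" | awk '{print $1}')
expected=$(cat "$tmpdir/kubectl.sha1")

if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
else
    echo "checksum mismatch" >&2
    exit 1
fi
```

If the digests disagree, a real downloader would discard the file and retry rather than cache a corrupt binary.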

TestDownloadOnly/v1.24.2/json-events (6.92s)

=== RUN   TestDownloadOnly/v1.24.2/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220629105213-24356 --force --alsologtostderr --kubernetes-version=v1.24.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220629105213-24356 --force --alsologtostderr --kubernetes-version=v1.24.2 --container-runtime=docker --driver=docker : (6.919196473s)
--- PASS: TestDownloadOnly/v1.24.2/json-events (6.92s)

TestDownloadOnly/v1.24.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.24.2/preload-exists
--- PASS: TestDownloadOnly/v1.24.2/preload-exists (0.00s)
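The `preload-exists` check amounts to a `stat` of the expected tarball under `.minikube/cache/preloaded-tarball/` (this is exactly what fails for v1.16.0 in this run, where no preload tarball was downloaded). A hedged standalone sketch of an equivalent check follows; the tarball name pattern and the `MINIKUBE_HOME` layout are taken from the paths in this log, and the default version argument is illustrative.

```shell
#!/bin/sh
# Sketch: does the preload tarball for a given Kubernetes version exist locally?
# Name pattern mirrors the one appearing in this report:
#   preloaded-images-k8s-v18-<k8s-version>-docker-overlay2-amd64.tar.lz4
K8S_VERSION=${1:-v1.24.2}
MINIKUBE_HOME=${MINIKUBE_HOME:-$HOME/.minikube}

tarball="$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-${K8S_VERSION}-docker-overlay2-amd64.tar.lz4"

# -f is the shell equivalent of the stat() call in the failing test assertion
if [ -f "$tarball" ]; then
    echo "preload exists: $tarball"
else
    echo "preload missing: $tarball"
fi
```

When the tarball is missing, minikube falls back to pulling and caching each image individually, which is what the per-image `cache.go` lines in the v1.16.0 log above record.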

TestDownloadOnly/v1.24.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.24.2/kubectl
--- PASS: TestDownloadOnly/v1.24.2/kubectl (0.00s)

TestDownloadOnly/v1.24.2/LogsDuration (0.31s)

=== RUN   TestDownloadOnly/v1.24.2/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220629105213-24356
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220629105213-24356: exit status 85 (307.332ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------|----------|---------|---------|---------------------|----------|
	| Command |                Args                | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|------------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only -p         | minikube | jenkins | v1.26.0 | 29 Jun 22 10:52 PDT |          |
	|         | download-only-20220629105213-24356 |          |         |         |                     |          |
	|         | --force --alsologtostderr          |          |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0       |          |         |         |                     |          |
	|         | --container-runtime=docker         |          |         |         |                     |          |
	|         | --driver=docker                    |          |         |         |                     |          |
	| start   | -o=json --download-only -p         | minikube | jenkins | v1.26.0 | 29 Jun 22 10:52 PDT |          |
	|         | download-only-20220629105213-24356 |          |         |         |                     |          |
	|         | --force --alsologtostderr          |          |         |         |                     |          |
	|         | --kubernetes-version=v1.24.2       |          |         |         |                     |          |
	|         | --container-runtime=docker         |          |         |         |                     |          |
	|         | --driver=docker                    |          |         |         |                     |          |
	|---------|------------------------------------|----------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/29 10:52:49
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0629 10:52:49.289228   24930 out.go:296] Setting OutFile to fd 1 ...
	I0629 10:52:49.289388   24930 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 10:52:49.289393   24930 out.go:309] Setting ErrFile to fd 2...
	I0629 10:52:49.289397   24930 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 10:52:49.289723   24930 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	W0629 10:52:49.289812   24930 root.go:307] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/config/config.json: no such file or directory
	I0629 10:52:49.289952   24930 out.go:303] Setting JSON to true
	I0629 10:52:49.304715   24930 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6737,"bootTime":1656518432,"procs":349,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0629 10:52:49.304828   24930 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 10:52:49.326616   24930 out.go:97] [download-only-20220629105213-24356] minikube v1.26.0 on Darwin 12.4
	I0629 10:52:49.326808   24930 notify.go:193] Checking for updates...
	W0629 10:52:49.326907   24930 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball: no such file or directory
	I0629 10:52:49.348047   24930 out.go:169] MINIKUBE_LOCATION=14420
	I0629 10:52:49.369490   24930 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 10:52:49.391591   24930 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0629 10:52:49.413314   24930 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 10:52:49.434428   24930 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	W0629 10:52:49.478146   24930 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0629 10:52:49.478793   24930 config.go:178] Loaded profile config "download-only-20220629105213-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0629 10:52:49.478867   24930 start.go:716] api.Load failed for download-only-20220629105213-24356: filestore "download-only-20220629105213-24356": Docker machine "download-only-20220629105213-24356" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0629 10:52:49.478932   24930 driver.go:360] Setting default libvirt URI to qemu:///system
	W0629 10:52:49.478964   24930 start.go:716] api.Load failed for download-only-20220629105213-24356: filestore "download-only-20220629105213-24356": Docker machine "download-only-20220629105213-24356" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0629 10:52:49.545872   24930 docker.go:137] docker version: linux-20.10.16
	I0629 10:52:49.545985   24930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 10:52:49.665608   24930 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:45 SystemTime:2022-06-29 17:52:49.618179102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 10:52:49.687807   24930 out.go:97] Using the docker driver based on existing profile
	I0629 10:52:49.687850   24930 start.go:284] selected driver: docker
	I0629 10:52:49.687861   24930 start.go:808] validating driver "docker" against &{Name:download-only-20220629105213-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220629105213-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 10:52:49.688125   24930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 10:52:49.808055   24930 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:45 SystemTime:2022-06-29 17:52:49.760839071 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 10:52:49.810162   24930 cni.go:95] Creating CNI manager for ""
	I0629 10:52:49.810178   24930 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0629 10:52:49.810201   24930 start_flags.go:310] config:
	{Name:download-only-20220629105213-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:download-only-20220629105213-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDoma
in:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 10:52:49.832336   24930 out.go:97] Starting control plane node download-only-20220629105213-24356 in cluster download-only-20220629105213-24356
	I0629 10:52:49.832414   24930 cache.go:120] Beginning downloading kic base image for docker with docker
	I0629 10:52:49.853823   24930 out.go:97] Pulling base image ...
	I0629 10:52:49.853915   24930 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 10:52:49.854051   24930 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
	I0629 10:52:49.917418   24930 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e to local cache
	I0629 10:52:49.917577   24930 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.2/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	I0629 10:52:49.917590   24930 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local cache directory
	I0629 10:52:49.917591   24930 cache.go:57] Caching tarball of preloaded images
	I0629 10:52:49.917611   24930 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local cache directory, skipping pull
	I0629 10:52:49.917618   24930 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in cache, skipping pull
	I0629 10:52:49.917626   24930 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e as a tarball
	I0629 10:52:49.917762   24930 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
	I0629 10:52:49.939874   24930 out.go:97] Downloading Kubernetes v1.24.2 preload ...
	I0629 10:52:49.939961   24930 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 ...
	I0629 10:52:50.071660   24930 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.2/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4?checksum=md5:015c5bcd220ede3ee64238beb9734721 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220629105213-24356"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.24.2/LogsDuration (0.31s)

TestDownloadOnly/DeleteAll (2.09s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
aaa_download_only_test.go:191: (dbg) Done: out/minikube-darwin-amd64 delete --all: (2.090444436s)
--- PASS: TestDownloadOnly/DeleteAll (2.09s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.43s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-20220629105213-24356
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.43s)

TestDownloadOnlyKic (7.59s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-20220629105259-24356 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:228: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-20220629105259-24356 --force --alsologtostderr --driver=docker : (6.421733001s)
helpers_test.go:175: Cleaning up "download-docker-20220629105259-24356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-20220629105259-24356
--- PASS: TestDownloadOnlyKic (7.59s)

TestBinaryMirror (1.69s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220629105307-24356 --alsologtostderr --binary-mirror http://127.0.0.1:64680 --driver=docker 
aaa_download_only_test.go:310: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220629105307-24356 --alsologtostderr --binary-mirror http://127.0.0.1:64680 --driver=docker : (1.01919603s)
helpers_test.go:175: Cleaning up "binary-mirror-20220629105307-24356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-20220629105307-24356
--- PASS: TestBinaryMirror (1.69s)

TestOffline (50.65s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-20220629112950-24356 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-20220629112950-24356 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (47.855205036s)
helpers_test.go:175: Cleaning up "offline-docker-20220629112950-24356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-20220629112950-24356
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-20220629112950-24356: (2.792554879s)
--- PASS: TestOffline (50.65s)

TestAddons/Setup (169.74s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-20220629105308-24356 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-darwin-amd64 start -p addons-20220629105308-24356 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m49.742465886s)
--- PASS: TestAddons/Setup (169.74s)

TestAddons/parallel/MetricsServer (5.57s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: metrics-server stabilized in 2.579279ms
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-8595bd7d4c-qcb4d" [11aa13d6-c984-4e6a-9d9c-477a7fda5a13] Running
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.051806278s
addons_test.go:367: (dbg) Run:  kubectl --context addons-20220629105308-24356 top pods -n kube-system
addons_test.go:384: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220629105308-24356 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.57s)

TestAddons/parallel/HelmTiller (11.24s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: tiller-deploy stabilized in 2.835804ms
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-c7d76457b-mr7d8" [e6cb761d-9bb7-4b73-9438-b017fdebb7e3] Running
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008627408s
addons_test.go:425: (dbg) Run:  kubectl --context addons-20220629105308-24356 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:425: (dbg) Done: kubectl --context addons-20220629105308-24356 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.732999812s)
addons_test.go:442: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220629105308-24356 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.24s)

TestAddons/parallel/CSI (40.68s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:513: csi-hostpath-driver pods stabilized in 5.168551ms
addons_test.go:516: (dbg) Run:  kubectl --context addons-20220629105308-24356 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:516: (dbg) Done: kubectl --context addons-20220629105308-24356 create -f testdata/csi-hostpath-driver/pvc.yaml: (3.006383196s)
addons_test.go:521: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220629105308-24356 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:526: (dbg) Run:  kubectl --context addons-20220629105308-24356 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:531: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [a5e46c7d-e69f-4bc0-a167-e4497daf2e03] Pending
helpers_test.go:342: "task-pv-pod" [a5e46c7d-e69f-4bc0-a167-e4497daf2e03] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod" [a5e46c7d-e69f-4bc0-a167-e4497daf2e03] Running
addons_test.go:531: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.010442599s
addons_test.go:536: (dbg) Run:  kubectl --context addons-20220629105308-24356 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:541: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220629105308-24356 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220629105308-24356 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:546: (dbg) Run:  kubectl --context addons-20220629105308-24356 delete pod task-pv-pod
addons_test.go:552: (dbg) Run:  kubectl --context addons-20220629105308-24356 delete pvc hpvc
addons_test.go:558: (dbg) Run:  kubectl --context addons-20220629105308-24356 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:563: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220629105308-24356 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:568: (dbg) Run:  kubectl --context addons-20220629105308-24356 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:573: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [c693e4b1-781a-4f69-9482-e17625bcfcb9] Pending
helpers_test.go:342: "task-pv-pod-restore" [c693e4b1-781a-4f69-9482-e17625bcfcb9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [c693e4b1-781a-4f69-9482-e17625bcfcb9] Running
addons_test.go:573: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 11.009779657s
addons_test.go:578: (dbg) Run:  kubectl --context addons-20220629105308-24356 delete pod task-pv-pod-restore
addons_test.go:582: (dbg) Run:  kubectl --context addons-20220629105308-24356 delete pvc hpvc-restore
addons_test.go:586: (dbg) Run:  kubectl --context addons-20220629105308-24356 delete volumesnapshot new-snapshot-demo
addons_test.go:590: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220629105308-24356 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:590: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220629105308-24356 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.892695803s)
addons_test.go:594: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220629105308-24356 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (40.68s)

TestAddons/parallel/Headlamp (10.25s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:737: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-20220629105308-24356 --alsologtostderr -v=1
addons_test.go:737: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-20220629105308-24356 --alsologtostderr -v=1: (1.238316812s)
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-866f5bd7bc-zt5vb" [ac1869a9-6181-492b-9792-a78c5c5f7ac3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:342: "headlamp-866f5bd7bc-zt5vb" [ac1869a9-6181-492b-9792-a78c5c5f7ac3] Running
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.009298372s
--- PASS: TestAddons/parallel/Headlamp (10.25s)

TestAddons/serial/GCPAuth (19.63s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:605: (dbg) Run:  kubectl --context addons-20220629105308-24356 create -f testdata/busybox.yaml
addons_test.go:612: (dbg) Run:  kubectl --context addons-20220629105308-24356 create sa gcp-auth-test
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [d69d31d4-986a-4932-87cc-2d7e1813aa8f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [d69d31d4-986a-4932-87cc-2d7e1813aa8f] Running
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 12.009745161s
addons_test.go:624: (dbg) Run:  kubectl --context addons-20220629105308-24356 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:636: (dbg) Run:  kubectl --context addons-20220629105308-24356 describe sa gcp-auth-test
addons_test.go:650: (dbg) Run:  kubectl --context addons-20220629105308-24356 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:674: (dbg) Run:  kubectl --context addons-20220629105308-24356 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:687: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220629105308-24356 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:687: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220629105308-24356 addons disable gcp-auth --alsologtostderr -v=1: (6.680506817s)
--- PASS: TestAddons/serial/GCPAuth (19.63s)

TestAddons/StoppedEnableDisable (12.92s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:134: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-20220629105308-24356
addons_test.go:134: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-20220629105308-24356: (12.523907798s)
addons_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-20220629105308-24356
addons_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-20220629105308-24356
--- PASS: TestAddons/StoppedEnableDisable (12.92s)

TestCertOptions (37.03s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-20220629113128-24356 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-20220629113128-24356 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (33.287396325s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-20220629113128-24356 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-20220629113128-24356 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-20220629113128-24356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-20220629113128-24356
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-20220629113128-24356: (2.721068579s)
--- PASS: TestCertOptions (37.03s)

TestCertExpiration (240.41s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220629113118-24356 --memory=2048 --cert-expiration=3m --driver=docker 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220629113118-24356 --memory=2048 --cert-expiration=3m --driver=docker : (31.073008118s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220629113118-24356 --memory=2048 --cert-expiration=8760h --driver=docker 
E0629 11:35:05.297949   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220629113118-24356 --memory=2048 --cert-expiration=8760h --driver=docker : (26.620531144s)
helpers_test.go:175: Cleaning up "cert-expiration-20220629113118-24356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-20220629113118-24356
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-20220629113118-24356: (2.716267574s)
--- PASS: TestCertExpiration (240.41s)

TestDockerFlags (34.24s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-20220629113054-24356 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E0629 11:30:58.493021   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
E0629 11:31:07.725425   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-20220629113054-24356 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (30.325317034s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220629113054-24356 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220629113054-24356 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-20220629113054-24356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-20220629113054-24356
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-20220629113054-24356: (2.826252714s)
--- PASS: TestDockerFlags (34.24s)
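TestDockerFlags passes when each `--docker-env` pair shows up in the daemon's systemd unit, which the test reads via `systemctl show docker --property=Environment`. The substring check it performs can be sketched in plain shell against a hard-coded sample line (the `Environment=` value below is an illustrative assumption, not captured from this run):

```shell
# Hypothetical sample of `systemctl show docker --property=Environment`
# output on the minikube node; the value is stubbed for illustration.
env_line='Environment=FOO=BAR BAZ=BAT'

result=ok
for pair in FOO=BAR BAZ=BAT; do
  case "$env_line" in
    *"$pair"*) ;;                    # flag made it into dockerd's unit
    *) result="missing $pair" ;;     # this would fail the test
  esac
done
echo "$result"
```

The real test does the same membership check on the live `systemctl show` output fetched over `minikube ssh`.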

                                                
                                    
TestForceSystemdFlag (37.37s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-20220629113041-24356 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-20220629113041-24356 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (34.085416945s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-20220629113041-24356 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-20220629113041-24356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-20220629113041-24356
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-20220629113041-24356: (2.758473271s)
--- PASS: TestForceSystemdFlag (37.37s)
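TestForceSystemdFlag (and TestForceSystemdEnv below) end with the same probe: `docker info --format "{{.CgroupDriver}}"` over SSH, expecting `systemd` when systemd is forced. A stand-alone sketch of that comparison, with the probe's output stubbed as an assumption:

```shell
# 'driver' stands in for the output of
#   minikube ssh -- docker info --format "{{.CgroupDriver}}"
# (stubbed here; with --force-systemd the expected value is "systemd").
driver='systemd'

if [ "$driver" = "systemd" ]; then
  verdict=pass
else
  verdict="fail: cgroup driver is $driver"
fi
echo "$verdict"
```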

                                                
                                    
TestForceSystemdEnv (35.63s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-20220629113018-24356 --memory=2048 --alsologtostderr -v=5 --driver=docker 

=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-20220629113018-24356 --memory=2048 --alsologtostderr -v=5 --driver=docker : (32.230231148s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-20220629113018-24356 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-20220629113018-24356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-20220629113018-24356
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-20220629113018-24356: (2.849603663s)
--- PASS: TestForceSystemdEnv (35.63s)

TestHyperKitDriverInstallOrUpdate (7.56s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.56s)

TestErrorSpam/setup (29.26s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-20220629105725-24356 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220629105725-24356 --driver=docker 
error_spam_test.go:78: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-20220629105725-24356 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220629105725-24356 --driver=docker : (29.25529486s)
--- PASS: TestErrorSpam/setup (29.26s)

TestErrorSpam/start (2.11s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220629105725-24356 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220629105725-24356 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220629105725-24356 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220629105725-24356 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220629105725-24356 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220629105725-24356 start --dry-run
--- PASS: TestErrorSpam/start (2.11s)

TestErrorSpam/status (1.33s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220629105725-24356 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220629105725-24356 status
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220629105725-24356 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220629105725-24356 status
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220629105725-24356 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220629105725-24356 status
--- PASS: TestErrorSpam/status (1.33s)

TestErrorSpam/pause (1.9s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220629105725-24356 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220629105725-24356 pause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220629105725-24356 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220629105725-24356 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220629105725-24356 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220629105725-24356 pause
--- PASS: TestErrorSpam/pause (1.90s)

TestErrorSpam/unpause (1.91s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220629105725-24356 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220629105725-24356 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220629105725-24356 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220629105725-24356 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220629105725-24356 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220629105725-24356 unpause
--- PASS: TestErrorSpam/unpause (1.91s)

TestErrorSpam/stop (13.09s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220629105725-24356 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220629105725-24356 stop
error_spam_test.go:156: (dbg) Done: out/minikube-darwin-amd64 -p nospam-20220629105725-24356 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220629105725-24356 stop: (12.425274125s)
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220629105725-24356 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220629105725-24356 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220629105725-24356 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220629105725-24356 stop
--- PASS: TestErrorSpam/stop (13.09s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/files/etc/test/nested/copy/24356/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (49.99s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220629105817-24356 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2160: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220629105817-24356 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (49.993219238s)
--- PASS: TestFunctional/serial/StartWithProxy (49.99s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.04s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220629105817-24356 --alsologtostderr -v=8
functional_test.go:651: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220629105817-24356 --alsologtostderr -v=8: (40.038456s)
functional_test.go:655: soft start took 40.038937479s for "functional-20220629105817-24356" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.04s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (1.64s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220629105817-24356 get po -A
functional_test.go:688: (dbg) Done: kubectl --context functional-20220629105817-24356 get po -A: (1.640571929s)
--- PASS: TestFunctional/serial/KubectlGetPods (1.64s)

TestFunctional/serial/CacheCmd/cache/add_remote (9.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220629105817-24356 cache add k8s.gcr.io/pause:3.1: (2.078646234s)
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220629105817-24356 cache add k8s.gcr.io/pause:3.3: (3.82105202s)
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 cache add k8s.gcr.io/pause:latest
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220629105817-24356 cache add k8s.gcr.io/pause:latest: (3.438663267s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (9.34s)

TestFunctional/serial/CacheCmd/cache/add_local (1.92s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220629105817-24356 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3493119997/001
functional_test.go:1081: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 cache add minikube-local-cache-test:functional-20220629105817-24356
functional_test.go:1081: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220629105817-24356 cache add minikube-local-cache-test:functional-20220629105817-24356: (1.408093753s)
functional_test.go:1086: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 cache delete minikube-local-cache-test:functional-20220629105817-24356
functional_test.go:1075: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220629105817-24356
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.92s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.46s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.46s)

TestFunctional/serial/CacheCmd/cache/cache_reload (3.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (432.397771ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 cache reload
functional_test.go:1150: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220629105817-24356 cache reload: (2.214267547s)
functional_test.go:1155: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (3.57s)
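The cache_reload sequence above is: remove the image on the node, confirm `crictl inspecti` now fails (the expected exit status 1 seen in the log), run `minikube cache reload`, then confirm the image is back. A stubbed shell sketch of that control flow (`inspecti` and `reload` are hypothetical stand-ins, not the real commands):

```shell
# present=0 models the image having just been removed with `docker rmi`.
present=0
inspecti() { [ "$present" -eq 1 ]; }   # stand-in for `crictl inspecti <image>`
reload()   { present=1; }              # stand-in for `minikube cache reload`

if ! inspecti; then                    # first check must fail, as in the log
  reload                               # re-push cached images to the node
fi
inspecti && echo "image restored"
```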

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.49s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 kubectl -- --context functional-20220629105817-24356 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.49s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.63s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out/kubectl --context functional-20220629105817-24356 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.63s)

TestFunctional/serial/ExtraConfig (55.28s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220629105817-24356 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0629 11:00:58.445547   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
E0629 11:00:58.451364   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
E0629 11:00:58.461805   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
E0629 11:00:58.481910   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
E0629 11:00:58.524102   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
E0629 11:00:58.604548   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
E0629 11:00:58.765197   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
E0629 11:00:59.110352   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
E0629 11:00:59.752606   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
functional_test.go:749: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220629105817-24356 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (55.279155723s)
functional_test.go:753: restart took 55.27930701s for "functional-20220629105817-24356" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (55.28s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220629105817-24356 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:817: etcd phase: Running
functional_test.go:827: etcd status: Ready
functional_test.go:817: kube-apiserver phase: Running
functional_test.go:827: kube-apiserver status: Ready
functional_test.go:817: kube-controller-manager phase: Running
functional_test.go:827: kube-controller-manager status: Ready
functional_test.go:817: kube-scheduler phase: Running
functional_test.go:827: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)
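ComponentHealth's pass condition reduces to: every control-plane pod returned by `kubectl get po -l tier=control-plane -o=json` reports phase Running (and a Ready status). A condensed sketch, with the parsed phases stubbed to the values logged above:

```shell
# Walk the four control-plane components this run checked; 'phase' is a
# stub for the value parsed out of the kubectl JSON, not a live query.
health=ok
for pod in etcd kube-apiserver kube-controller-manager kube-scheduler; do
  phase=Running
  if [ "$phase" != "Running" ]; then
    health="degraded: $pod is $phase"
  fi
done
echo "$health"
```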

                                                
                                    
TestFunctional/serial/LogsCmd (3.37s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 logs
E0629 11:01:01.033846   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
E0629 11:01:03.594115   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
functional_test.go:1228: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220629105817-24356 logs: (3.36702399s)
--- PASS: TestFunctional/serial/LogsCmd (3.37s)

TestFunctional/serial/LogsFileCmd (3.17s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd1924398999/001/logs.txt
functional_test.go:1242: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220629105817-24356 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd1924398999/001/logs.txt: (3.165621058s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.17s)

TestFunctional/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220629105817-24356 config get cpus: exit status 14 (68.519256ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 config set cpus 2

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 config get cpus
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 config unset cpus
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220629105817-24356 config get cpus: exit status 14 (49.436199ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)
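ConfigCmd exercises a round-trip: `config get` on an unset key exits with status 14 (the "key could not be found" error seen twice above), `config set cpus 2` stores the value, and a subsequent `get` prints it. A sketch of that contract, with `config_get` as a hypothetical stand-in for `minikube config get cpus`:

```shell
# config_get mimics `minikube config get cpus`: print the value when set,
# otherwise exit with status 14 (minikube's not-found exit code).
unset cpus
config_get() { [ -n "${cpus-}" ] && echo "$cpus" || return 14; }

config_get; first=$?                 # key unset: expect exit status 14
cpus=2                               # models `minikube config set cpus 2`
value=$(config_get)                  # now prints the stored value
echo "unset rc=$first, value=$value"
```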

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.7s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220629105817-24356 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:902: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220629105817-24356 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 27295: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.70s)

TestFunctional/parallel/DryRun (1.64s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220629105817-24356 --dry-run --memory 250MB --alsologtostderr --driver=docker 

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220629105817-24356 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (737.611012ms)

-- stdout --
	* [functional-20220629105817-24356] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14420
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0629 11:02:16.709827   27205 out.go:296] Setting OutFile to fd 1 ...
	I0629 11:02:16.730448   27205 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:02:16.730462   27205 out.go:309] Setting ErrFile to fd 2...
	I0629 11:02:16.730470   27205 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:02:16.731274   27205 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 11:02:16.752389   27205 out.go:303] Setting JSON to false
	I0629 11:02:16.770195   27205 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":7304,"bootTime":1656518432,"procs":348,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0629 11:02:16.770293   27205 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 11:02:16.792289   27205 out.go:177] * [functional-20220629105817-24356] minikube v1.26.0 on Darwin 12.4
	I0629 11:02:16.855275   27205 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 11:02:16.876349   27205 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 11:02:16.918230   27205 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0629 11:02:16.960113   27205 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 11:02:17.002567   27205 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 11:02:17.025062   27205 config.go:178] Loaded profile config "functional-20220629105817-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 11:02:17.025706   27205 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 11:02:17.098641   27205 docker.go:137] docker version: linux-20.10.16
	I0629 11:02:17.098797   27205 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:02:17.230943   27205 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 18:02:17.175317642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:02:17.253332   27205 out.go:177] * Using the docker driver based on existing profile
	I0629 11:02:17.274985   27205 start.go:284] selected driver: docker
	I0629 11:02:17.275006   27205 start.go:808] validating driver "docker" against &{Name:functional-20220629105817-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:functional-20220629105817-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:02:17.275188   27205 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 11:02:17.299860   27205 out.go:177] 
	W0629 11:02:17.321163   27205 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0629 11:02:17.349354   27205 out.go:177] 

** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220629105817-24356 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.64s)

TestFunctional/parallel/InternationalLanguage (0.73s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220629105817-24356 --dry-run --memory 250MB --alsologtostderr --driver=docker 

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220629105817-24356 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (733.297882ms)

-- stdout --
	* [functional-20220629105817-24356] minikube v1.26.0 sur Darwin 12.4
	  - MINIKUBE_LOCATION=14420
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0629 11:02:17.685656   27237 out.go:296] Setting OutFile to fd 1 ...
	I0629 11:02:17.685931   27237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:02:17.685942   27237 out.go:309] Setting ErrFile to fd 2...
	I0629 11:02:17.685949   27237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:02:17.686503   27237 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 11:02:17.686829   27237 out.go:303] Setting JSON to false
	I0629 11:02:17.703422   27237 start.go:115] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":7305,"bootTime":1656518432,"procs":351,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0629 11:02:17.703543   27237 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0629 11:02:17.724354   27237 out.go:177] * [functional-20220629105817-24356] minikube v1.26.0 sur Darwin 12.4
	I0629 11:02:17.766402   27237 out.go:177]   - MINIKUBE_LOCATION=14420
	I0629 11:02:17.808605   27237 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	I0629 11:02:17.850308   27237 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0629 11:02:17.871610   27237 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0629 11:02:17.892575   27237 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	I0629 11:02:17.914649   27237 config.go:178] Loaded profile config "functional-20220629105817-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 11:02:17.915366   27237 driver.go:360] Setting default libvirt URI to qemu:///system
	I0629 11:02:17.986753   27237 docker.go:137] docker version: linux-20.10.16
	I0629 11:02:17.986862   27237 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0629 11:02:18.117477   27237 info.go:265] docker info: {ID:YEZN:IB64:KEY7:MCNF:3VYN:XJOR:INZ4:HGIE:5H6H:U4DW:UQTX:HH2D Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:51 SystemTime:2022-06-29 18:02:18.061052191 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0629 11:02:18.160267   27237 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0629 11:02:18.181150   27237 start.go:284] selected driver: docker
	I0629 11:02:18.181174   27237 start.go:808] validating driver "docker" against &{Name:functional-20220629105817-24356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:functional-20220629105817-24356 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0629 11:02:18.181321   27237 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0629 11:02:18.205989   27237 out.go:177] 
	W0629 11:02:18.269232   27237 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0629 11:02:18.311839   27237 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.73s)

TestFunctional/parallel/StatusCmd (1.5s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 status
functional_test.go:852: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:864: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.50s)

TestFunctional/parallel/ServiceCmd (18.88s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220629105817-24356 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1438: (dbg) Run:  kubectl --context functional-20220629105817-24356 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54c4b5c49f-ddk2z" [eac204c3-2758-4b6f-b395-64ab3ed70b88] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54c4b5c49f-ddk2z" [eac204c3-2758-4b6f-b395-64ab3ed70b88] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 12.00989677s
functional_test.go:1448: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 service list
functional_test.go:1462: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 service --namespace=default --https --url hello-node
functional_test.go:1462: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220629105817-24356 service --namespace=default --https --url hello-node: (2.029927573s)
functional_test.go:1475: found endpoint: https://127.0.0.1:50045
functional_test.go:1490: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 service hello-node --url --format={{.IP}}

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1490: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220629105817-24356 service hello-node --url --format={{.IP}}: (2.02771104s)
functional_test.go:1504: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 service hello-node --url
functional_test.go:1504: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220629105817-24356 service hello-node --url: (2.027350836s)
functional_test.go:1510: found endpoint for hello-node: http://127.0.0.1:50085
--- PASS: TestFunctional/parallel/ServiceCmd (18.88s)

TestFunctional/parallel/AddonsCmd (0.29s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 addons list
functional_test.go:1631: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.29s)

TestFunctional/parallel/PersistentVolumeClaim (27.53s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [834add92-b145-4aaa-bb00-f8839be48f4c] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.010558109s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220629105817-24356 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220629105817-24356 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220629105817-24356 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220629105817-24356 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [5fb1fc8a-6273-4759-95c0-38409c5ca815] Pending
helpers_test.go:342: "sp-pod" [5fb1fc8a-6273-4759-95c0-38409c5ca815] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [5fb1fc8a-6273-4759-95c0-38409c5ca815] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.007456409s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220629105817-24356 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220629105817-24356 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220629105817-24356 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [eb544e3b-dbd1-46a8-8541-d4791e44d7c4] Pending
helpers_test.go:342: "sp-pod" [eb544e3b-dbd1-46a8-8541-d4791e44d7c4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [eb544e3b-dbd1-46a8-8541-d4791e44d7c4] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.009137929s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220629105817-24356 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.53s)

TestFunctional/parallel/SSHCmd (0.91s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh "echo hello"
functional_test.go:1671: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.91s)

TestFunctional/parallel/CpCmd (1.75s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh -n functional-20220629105817-24356 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 cp functional-20220629105817-24356:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelCpCmd3923727023/001/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh -n functional-20220629105817-24356 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.75s)

TestFunctional/parallel/MySQL (20.22s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220629105817-24356 replace --force -f testdata/mysql.yaml

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:342: "mysql-67f7d69d8b-p968q" [636c07ae-7d11-46b4-8297-34f8b40e68b9] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E0629 11:01:08.714536   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-67f7d69d8b-p968q" [636c07ae-7d11-46b4-8297-34f8b40e68b9] Running
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.017315607s
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220629105817-24356 exec mysql-67f7d69d8b-p968q -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220629105817-24356 exec mysql-67f7d69d8b-p968q -- mysql -ppassword -e "show databases;": exit status 1 (135.31125ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220629105817-24356 exec mysql-67f7d69d8b-p968q -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220629105817-24356 exec mysql-67f7d69d8b-p968q -- mysql -ppassword -e "show databases;": exit status 1 (142.175746ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220629105817-24356 exec mysql-67f7d69d8b-p968q -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.22s)
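The two `ERROR 2002` failures above are expected while mysqld is still initializing inside the pod; the test simply re-runs the query until it exits 0. A minimal sketch of that retry pattern (the `retry` helper and 1-second interval below are illustrative, not part of the minikube test suite):

```shell
# Retry a command until it succeeds, as the test does for
# `mysql -e "show databases;"` against a freshly started pod.
retry() {
  attempts=$1; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    "$@" && return 0      # command succeeded; stop retrying
    i=$((i + 1))
    sleep 1               # brief pause before the next attempt
  done
  return 1                # exhausted all attempts
}

# Real usage would resemble (names taken from the log above):
#   retry 10 kubectl --context functional-20220629105817-24356 \
#     exec mysql-67f7d69d8b-p968q -- mysql -ppassword -e "show databases;"
retry 3 true && echo "mysql ready"
```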

TestFunctional/parallel/FileSync (0.49s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/24356/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh "sudo cat /etc/test/nested/copy/24356/hosts"
functional_test.go:1862: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.49s)

TestFunctional/parallel/CertSync (3.1s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/24356.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh "sudo cat /etc/ssl/certs/24356.pem"
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/24356.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh "sudo cat /usr/share/ca-certificates/24356.pem"
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1925: Checking for existence of /etc/ssl/certs/243562.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh "sudo cat /etc/ssl/certs/243562.pem"
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/243562.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh "sudo cat /usr/share/ca-certificates/243562.pem"
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (3.10s)
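The `51391683.0` and `3ec20f2e.0` names checked above are OpenSSL subject-hash filenames: `c_rehash`-style links under `/etc/ssl/certs/` derived from the certificate's subject. The hash for any PEM certificate can be computed as below (the self-signed cert is generated purely for illustration; it is not the cert the test syncs):

```shell
# Generate a throwaway self-signed cert, then derive the 8-hex-digit
# subject hash that /etc/ssl/certs/<hash>.0 links are named after.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=certsync-demo" \
  -keyout demo-key.pem -out demo-cert.pem 2>/dev/null
hash=$(openssl x509 -noout -subject_hash -in demo-cert.pem)
echo "$hash"
```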

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220629105817-24356 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh "sudo systemctl is-active crio"
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh "sudo systemctl is-active crio": exit status 1 (428.361538ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

TestFunctional/parallel/Version/short (0.12s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 version --short
--- PASS: TestFunctional/parallel/Version/short (0.12s)

TestFunctional/parallel/Version/components (0.76s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.76s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 image ls --format short
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220629105817-24356 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.7
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.24.2
k8s.gcr.io/kube-proxy:v1.24.2
k8s.gcr.io/kube-controller-manager:v1.24.2
k8s.gcr.io/kube-apiserver:v1.24.2
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220629105817-24356
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220629105817-24356
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 image ls --format table
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220629105817-24356 image ls --format table:
|---------------------------------------------|---------------------------------|---------------|--------|
|                    Image                    |               Tag               |   Image ID    |  Size  |
|---------------------------------------------|---------------------------------|---------------|--------|
| docker.io/localhost/my-image                | functional-20220629105817-24356 | fb0a351f20b8d | 1.24MB |
| k8s.gcr.io/kube-proxy                       | v1.24.2                         | a634548d10b03 | 110MB  |
| k8s.gcr.io/pause                            | latest                          | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | alpine                          | f246e6f9d0b28 | 23.5MB |
| k8s.gcr.io/kube-scheduler                   | v1.24.2                         | 5d725196c1f47 | 51MB   |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                          | a4ca41631cc7a | 46.8MB |
| k8s.gcr.io/pause                            | 3.6                             | 6270bb605e12e | 683kB  |
| k8s.gcr.io/pause                            | 3.3                             | 0184c1613d929 | 683kB  |
| docker.io/library/mysql                     | 5.7                             | efa50097efbde | 462MB  |
| k8s.gcr.io/pause                            | 3.7                             | 221177c6082a8 | 711kB  |
| gcr.io/k8s-minikube/busybox                 | latest                          | beae173ccac6a | 1.24MB |
| gcr.io/google-containers/addon-resizer      | functional-20220629105817-24356 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                    | 56cc512116c8f | 4.4MB  |
| docker.io/library/minikube-local-cache-test | functional-20220629105817-24356 | 2a65922a1be67 | 30B    |
| docker.io/library/nginx                     | latest                          | 55f4b40fe486a | 142MB  |
| k8s.gcr.io/kube-apiserver                   | v1.24.2                         | d3377ffb7177c | 130MB  |
| k8s.gcr.io/kube-controller-manager          | v1.24.2                         | 34cdf99b1bb3b | 119MB  |
| k8s.gcr.io/etcd                             | 3.5.3-0                         | aebe758cef4cd | 299MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                              | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/pause                            | 3.1                             | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/echoserver                       | 1.8                             | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|---------------------------------|---------------|--------|
2022/06/29 11:02:31 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 image ls --format json
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220629105817-24356 image ls --format json:
[{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"efa50097efbdef5884e5ebaba4da5899e79609b78cd4fe91b365d5d9d3205188","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"462000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"f246e6f9d0b28d6eb1f7e1f12791f23587c2c6aa42c82aba8d6fe6e2e2de9e95","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23500000"},{"id":"221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.7"],"size":"711000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220629105817-24356"],"size":"32900000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"2a65922a1be67f0c28140fe8e1b4878dffef47d19945ec5d8c1197d60754bfd3","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220629105817-24356"],"size":"30"},{"id":"55f4b40fe486a5b734b46bb7bf28f52fa31426bf23be068c8e7b19e58d9b8deb","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"34cdf99b1bb3b3a62c5b4226c3bc0983ab1f13e776269d1872092091b07203df","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.24.2"],"size":"119000000"},{"id":"a634548d10b032c2a1d704ef9a2ab04c12b0574afe67ee192b196a7f12da9536","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.24.2"],"size":"110000000"},{"id":"5d725196c1f47e72d2bc7069776d5928b1fb1e4adf09c18997733099aa3663ac","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.24.2"],"size":"51000000"},{"id":"aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.3-0"],"size":"299000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"fb0a351f20b8d40970a1a015cb0355faff236636f1026733b902c9e998e57048","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-20220629105817-24356"],"size":"1240000"},{"id":"d3377ffb7177cc4becce8a534d8547aca9530cb30fac9ebe479b31102f1ba503","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.24.2"],"size":"130000000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 image ls --format yaml
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220629105817-24356 image ls --format yaml:
- id: f246e6f9d0b28d6eb1f7e1f12791f23587c2c6aa42c82aba8d6fe6e2e2de9e95
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23500000"
- id: 34cdf99b1bb3b3a62c5b4226c3bc0983ab1f13e776269d1872092091b07203df
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.24.2
size: "119000000"
- id: 221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.7
size: "711000"
- id: 2a65922a1be67f0c28140fe8e1b4878dffef47d19945ec5d8c1197d60754bfd3
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220629105817-24356
size: "30"
- id: d3377ffb7177cc4becce8a534d8547aca9530cb30fac9ebe479b31102f1ba503
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.24.2
size: "130000000"
- id: a634548d10b032c2a1d704ef9a2ab04c12b0574afe67ee192b196a7f12da9536
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.24.2
size: "110000000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: efa50097efbdef5884e5ebaba4da5899e79609b78cd4fe91b365d5d9d3205188
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "462000000"
- id: 55f4b40fe486a5b734b46bb7bf28f52fa31426bf23be068c8e7b19e58d9b8deb
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 5d725196c1f47e72d2bc7069776d5928b1fb1e4adf09c18997733099aa3663ac
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.24.2
size: "51000000"
- id: aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.3-0
size: "299000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220629105817-24356
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh pgrep buildkitd
functional_test.go:303: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh pgrep buildkitd: exit status 1 (414.882191ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 image build -t localhost/my-image:functional-20220629105817-24356 testdata/build
functional_test.go:310: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220629105817-24356 image build -t localhost/my-image:functional-20220629105817-24356 testdata/build: (5.032557143s)
functional_test.go:315: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220629105817-24356 image build -t localhost/my-image:functional-20220629105817-24356 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 8f8c4878eea5
Removing intermediate container 8f8c4878eea5
---> 32157a20dc60
Step 3/3 : ADD content.txt /
---> fb0a351f20b8
Successfully built fb0a351f20b8
Successfully tagged localhost/my-image:functional-20220629105817-24356
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.78s)
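The three `Step 1/3`–`3/3` lines above imply a minimal Dockerfile in `testdata/build`. The sketch below reconstructs an equivalent build context from the log; the real contents of `content.txt` are not shown in the report, so the placeholder line is an assumption:

```shell
# Recreate a build context equivalent to testdata/build, inferred from
# the "Step 1/3 .. 3/3" output above. content.txt's true contents are
# unknown; "placeholder" stands in for them.
mkdir -p build-sketch
printf 'placeholder\n' > build-sketch/content.txt
cat > build-sketch/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
# The test then effectively runs:
#   minikube image build -t localhost/my-image:<profile> build-sketch
cat build-sketch/Dockerfile
```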

TestFunctional/parallel/ImageCommands/Setup (4.19s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.105403765s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220629105817-24356
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.19s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220629105817-24356
functional_test.go:350: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220629105817-24356 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220629105817-24356: (4.141394638s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.52s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220629105817-24356
E0629 11:01:18.954983   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220629105817-24356 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220629105817-24356: (2.561977629s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.89s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:230: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.947404954s)
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220629105817-24356
functional_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220629105817-24356
functional_test.go:240: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220629105817-24356 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220629105817-24356: (2.899494798s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.25s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 image save gcr.io/google-containers/addon-resizer:functional-20220629105817-24356 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:375: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220629105817-24356 image save gcr.io/google-containers/addon-resizer:functional-20220629105817-24356 /Users/jenkins/workspace/addon-resizer-save.tar: (1.336691686s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.34s)

TestFunctional/parallel/DockerEnv/bash (1.98s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:491: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220629105817-24356 docker-env) && out/minikube-darwin-amd64 status -p functional-20220629105817-24356"
=== CONT  TestFunctional/parallel/DockerEnv/bash
functional_test.go:491: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220629105817-24356 docker-env) && out/minikube-darwin-amd64 status -p functional-20220629105817-24356": (1.260151023s)
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220629105817-24356 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.98s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.78s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 image rm gcr.io/google-containers/addon-resizer:functional-20220629105817-24356
=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.78s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.92s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 image load /Users/jenkins/workspace/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220629105817-24356 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.562375566s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.92s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.39s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.39s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.45s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 update-context --alsologtostderr -v=2
E0629 11:02:20.396895   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.45s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.37s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.37s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.82s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220629105817-24356
functional_test.go:419: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220629105817-24356
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220629105817-24356 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220629105817-24356: (2.664428442s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220629105817-24356
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.82s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-20220629105817-24356 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.17s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220629105817-24356 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [945793dd-f475-4922-9172-9e4747d454f6] Pending
helpers_test.go:342: "nginx-svc" [945793dd-f475-4922-9172-9e4747d454f6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0629 11:01:39.435392   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [945793dd-f475-4922-9172-9e4747d454f6] Running
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.008130297s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.17s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220629105817-24356 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-20220629105817-24356 tunnel --alsologtostderr] ...
helpers_test.go:500: unable to terminate pid 26881: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.65s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.65s)
TestFunctional/parallel/ProfileCmd/profile_list (0.53s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1310: Took "458.683132ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1324: Took "75.553842ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)
TestFunctional/parallel/ProfileCmd/profile_json_output (0.64s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1361: Took "514.255963ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1374: Took "121.268246ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.64s)
TestFunctional/parallel/MountCmd/any-port (11.94s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220629105817-24356 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port737356043/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1656525722917799000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port737356043/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1656525722917799000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port737356043/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1656525722917799000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port737356043/001/test-1656525722917799000
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (450.995437ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun 29 18:02 created-by-test
-rw-r--r-- 1 docker docker 24 Jun 29 18:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun 29 18:02 test-1656525722917799000
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh cat /mount-9p/test-1656525722917799000
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220629105817-24356 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [35a10fa7-ed2a-4fa4-9517-9c5883acc84f] Pending
helpers_test.go:342: "busybox-mount" [35a10fa7-ed2a-4fa4-9517-9c5883acc84f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [35a10fa7-ed2a-4fa4-9517-9c5883acc84f] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [35a10fa7-ed2a-4fa4-9517-9c5883acc84f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.008739684s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220629105817-24356 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220629105817-24356 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port737356043/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.94s)
TestFunctional/parallel/MountCmd/specific-port (2.77s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220629105817-24356 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port1895567633/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (463.237686ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220629105817-24356 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port1895567633/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh "sudo umount -f /mount-9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh "sudo umount -f /mount-9p": exit status 1 (740.384782ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:225: "out/minikube-darwin-amd64 -p functional-20220629105817-24356 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220629105817-24356 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port1895567633/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.77s)
TestFunctional/delete_addon-resizer_images (0.19s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220629105817-24356
--- PASS: TestFunctional/delete_addon-resizer_images (0.19s)
TestFunctional/delete_my-image_image (0.07s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220629105817-24356
--- PASS: TestFunctional/delete_my-image_image (0.07s)
TestFunctional/delete_minikube_cached_images (0.07s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220629105817-24356
--- PASS: TestFunctional/delete_minikube_cached_images (0.07s)
TestJSONOutput/start/Command (44.78s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-20220629110953-24356 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-20220629110953-24356 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (44.780126788s)
--- PASS: TestJSONOutput/start/Command (44.78s)
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/pause/Command (0.67s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-20220629110953-24356 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/unpause/Command (0.66s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-20220629110953-24356 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/stop/Command (12.33s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-20220629110953-24356 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-20220629110953-24356 --output=json --user=testUser: (12.332844643s)
--- PASS: TestJSONOutput/stop/Command (12.33s)
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
TestErrorJSONOutput (0.78s)
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-20220629111054-24356 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-20220629111054-24356 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (333.787828ms)

-- stdout --
	{"specversion":"1.0","id":"30ba1110-b908-4415-bde4-8fb51fb7d3e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220629111054-24356] minikube v1.26.0 on Darwin 12.4","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6e00587c-6d32-44c2-a65e-04aa55b33699","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14420"}}
	{"specversion":"1.0","id":"a33f3939-0508-45ae-bccf-f37890a6244d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig"}}
	{"specversion":"1.0","id":"1aacb361-b952-411d-be7b-73736df4a7e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"a7c7633d-022f-4848-9ae5-17d425dedc1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f6b2dc15-1da2-480f-a7bf-6c8dee409369","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube"}}
	{"specversion":"1.0","id":"dcf4381f-3434-43d8-886a-aaf27ec685f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220629111054-24356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-20220629111054-24356
--- PASS: TestErrorJSONOutput (0.78s)

TestKicCustomNetwork/create_custom_network (32.2s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220629111055-24356 --network=
E0629 11:10:58.449548   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
E0629 11:11:07.681858   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220629111055-24356 --network=: (29.419928023s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220629111055-24356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220629111055-24356
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220629111055-24356: (2.715402682s)
--- PASS: TestKicCustomNetwork/create_custom_network (32.20s)

TestKicCustomNetwork/use_default_bridge_network (32.65s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220629111127-24356 --network=bridge
E0629 11:11:35.382451   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220629111127-24356 --network=bridge: (30.06022933s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220629111127-24356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220629111127-24356
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220629111127-24356: (2.524067396s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.65s)

TestKicExistingNetwork (32.93s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-20220629111200-24356 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-20220629111200-24356 --network=existing-network: (30.006874859s)
helpers_test.go:175: Cleaning up "existing-network-20220629111200-24356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-20220629111200-24356
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-20220629111200-24356: (2.504669019s)
--- PASS: TestKicExistingNetwork (32.93s)

TestKicCustomSubnet (34.51s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-20220629111233-24356 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-20220629111233-24356 --subnet=192.168.60.0/24: (31.764716187s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220629111233-24356 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-20220629111233-24356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-20220629111233-24356
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-20220629111233-24356: (2.678583798s)
--- PASS: TestKicCustomSubnet (34.51s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (68.85s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-20220629111307-24356 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-20220629111307-24356 --driver=docker : (29.273514091s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-20220629111307-24356 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-20220629111307-24356 --driver=docker : (32.073475117s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-20220629111307-24356
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-20220629111307-24356
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-20220629111307-24356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-20220629111307-24356
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-20220629111307-24356: (2.704019337s)
helpers_test.go:175: Cleaning up "first-20220629111307-24356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-20220629111307-24356
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-20220629111307-24356: (2.742167394s)
--- PASS: TestMinikubeProfile (68.85s)

TestMountStart/serial/StartWithMountFirst (7.74s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-20220629111416-24356 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-20220629111416-24356 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.740993869s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.74s)

TestMountStart/serial/VerifyMountFirst (0.43s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-20220629111416-24356 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.43s)

TestMountStart/serial/StartWithMountSecond (7.71s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220629111416-24356 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220629111416-24356 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.701251326s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.71s)

TestMountStart/serial/VerifyMountSecond (0.44s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220629111416-24356 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.44s)

TestMountStart/serial/DeleteFirst (2.28s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-20220629111416-24356 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-20220629111416-24356 --alsologtostderr -v=5: (2.273671237s)
--- PASS: TestMountStart/serial/DeleteFirst (2.28s)

TestMountStart/serial/VerifyMountPostDelete (0.43s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220629111416-24356 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.43s)

TestMountStart/serial/Stop (1.64s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-20220629111416-24356
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-20220629111416-24356: (1.637948088s)
--- PASS: TestMountStart/serial/Stop (1.64s)

TestMountStart/serial/RestartStopped (5.6s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220629111416-24356
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220629111416-24356: (4.597953846s)
--- PASS: TestMountStart/serial/RestartStopped (5.60s)

TestMountStart/serial/VerifyMountPostStop (0.43s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220629111416-24356 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.43s)

TestMultiNode/serial/FreshStart2Nodes (96.06s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220629111446-24356 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0629 11:15:58.460135   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
E0629 11:16:07.695462   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220629111446-24356 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m35.297345385s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (96.06s)

TestMultiNode/serial/DeployApp2Nodes (9.24s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220629111446-24356 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:479: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220629111446-24356 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (1.720182801s)
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220629111446-24356 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220629111446-24356 -- rollout status deployment/busybox: (6.149155118s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220629111446-24356 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220629111446-24356 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220629111446-24356 -- exec busybox-d46db594c-7fwr5 -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220629111446-24356 -- exec busybox-d46db594c-bgbrm -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220629111446-24356 -- exec busybox-d46db594c-7fwr5 -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220629111446-24356 -- exec busybox-d46db594c-bgbrm -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220629111446-24356 -- exec busybox-d46db594c-7fwr5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220629111446-24356 -- exec busybox-d46db594c-bgbrm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.24s)

TestMultiNode/serial/PingHostFrom2Pods (0.85s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220629111446-24356 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220629111446-24356 -- exec busybox-d46db594c-7fwr5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220629111446-24356 -- exec busybox-d46db594c-7fwr5 -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220629111446-24356 -- exec busybox-d46db594c-bgbrm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220629111446-24356 -- exec busybox-d46db594c-bgbrm -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)

TestMultiNode/serial/AddNode (37.11s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220629111446-24356 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-20220629111446-24356 -v 3 --alsologtostderr: (36.005506715s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220629111446-24356 status --alsologtostderr: (1.107999419s)
--- PASS: TestMultiNode/serial/AddNode (37.11s)

TestMultiNode/serial/ProfileList (0.53s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.53s)

TestMultiNode/serial/CopyFile (16.91s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220629111446-24356 status --output json --alsologtostderr: (1.165557622s)
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 cp testdata/cp-test.txt multinode-20220629111446-24356:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 ssh -n multinode-20220629111446-24356 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 cp multinode-20220629111446-24356:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile3950791411/001/cp-test_multinode-20220629111446-24356.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 ssh -n multinode-20220629111446-24356 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 cp multinode-20220629111446-24356:/home/docker/cp-test.txt multinode-20220629111446-24356-m02:/home/docker/cp-test_multinode-20220629111446-24356_multinode-20220629111446-24356-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 ssh -n multinode-20220629111446-24356 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 ssh -n multinode-20220629111446-24356-m02 "sudo cat /home/docker/cp-test_multinode-20220629111446-24356_multinode-20220629111446-24356-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 cp multinode-20220629111446-24356:/home/docker/cp-test.txt multinode-20220629111446-24356-m03:/home/docker/cp-test_multinode-20220629111446-24356_multinode-20220629111446-24356-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 ssh -n multinode-20220629111446-24356 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 ssh -n multinode-20220629111446-24356-m03 "sudo cat /home/docker/cp-test_multinode-20220629111446-24356_multinode-20220629111446-24356-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 cp testdata/cp-test.txt multinode-20220629111446-24356-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 ssh -n multinode-20220629111446-24356-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 cp multinode-20220629111446-24356-m02:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile3950791411/001/cp-test_multinode-20220629111446-24356-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 ssh -n multinode-20220629111446-24356-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 cp multinode-20220629111446-24356-m02:/home/docker/cp-test.txt multinode-20220629111446-24356:/home/docker/cp-test_multinode-20220629111446-24356-m02_multinode-20220629111446-24356.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 ssh -n multinode-20220629111446-24356-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 ssh -n multinode-20220629111446-24356 "sudo cat /home/docker/cp-test_multinode-20220629111446-24356-m02_multinode-20220629111446-24356.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 cp multinode-20220629111446-24356-m02:/home/docker/cp-test.txt multinode-20220629111446-24356-m03:/home/docker/cp-test_multinode-20220629111446-24356-m02_multinode-20220629111446-24356-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 ssh -n multinode-20220629111446-24356-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 ssh -n multinode-20220629111446-24356-m03 "sudo cat /home/docker/cp-test_multinode-20220629111446-24356-m02_multinode-20220629111446-24356-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 cp testdata/cp-test.txt multinode-20220629111446-24356-m03:/home/docker/cp-test.txt
E0629 11:17:21.536238   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 ssh -n multinode-20220629111446-24356-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 cp multinode-20220629111446-24356-m03:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile3950791411/001/cp-test_multinode-20220629111446-24356-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 ssh -n multinode-20220629111446-24356-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 cp multinode-20220629111446-24356-m03:/home/docker/cp-test.txt multinode-20220629111446-24356:/home/docker/cp-test_multinode-20220629111446-24356-m03_multinode-20220629111446-24356.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 ssh -n multinode-20220629111446-24356-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 ssh -n multinode-20220629111446-24356 "sudo cat /home/docker/cp-test_multinode-20220629111446-24356-m03_multinode-20220629111446-24356.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 cp multinode-20220629111446-24356-m03:/home/docker/cp-test.txt multinode-20220629111446-24356-m02:/home/docker/cp-test_multinode-20220629111446-24356-m03_multinode-20220629111446-24356-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 ssh -n multinode-20220629111446-24356-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 ssh -n multinode-20220629111446-24356-m02 "sudo cat /home/docker/cp-test_multinode-20220629111446-24356-m03_multinode-20220629111446-24356-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (16.91s)

TestMultiNode/serial/StopNode (14.22s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220629111446-24356 node stop m03: (12.531381089s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220629111446-24356 status: exit status 7 (840.632374ms)

-- stdout --
	multinode-20220629111446-24356
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220629111446-24356-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220629111446-24356-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220629111446-24356 status --alsologtostderr: exit status 7 (842.419721ms)

-- stdout --
	multinode-20220629111446-24356
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220629111446-24356-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220629111446-24356-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
** stderr ** 
	I0629 11:17:40.211909   30855 out.go:296] Setting OutFile to fd 1 ...
	I0629 11:17:40.212116   30855 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:17:40.212122   30855 out.go:309] Setting ErrFile to fd 2...
	I0629 11:17:40.212125   30855 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:17:40.212225   30855 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 11:17:40.212392   30855 out.go:303] Setting JSON to false
	I0629 11:17:40.212406   30855 mustload.go:65] Loading cluster: multinode-20220629111446-24356
	I0629 11:17:40.212681   30855 config.go:178] Loaded profile config "multinode-20220629111446-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 11:17:40.212692   30855 status.go:253] checking status of multinode-20220629111446-24356 ...
	I0629 11:17:40.213056   30855 cli_runner.go:164] Run: docker container inspect multinode-20220629111446-24356 --format={{.State.Status}}
	I0629 11:17:40.283797   30855 status.go:328] multinode-20220629111446-24356 host status = "Running" (err=<nil>)
	I0629 11:17:40.283829   30855 host.go:66] Checking if "multinode-20220629111446-24356" exists ...
	I0629 11:17:40.284120   30855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629111446-24356
	I0629 11:17:40.354864   30855 host.go:66] Checking if "multinode-20220629111446-24356" exists ...
	I0629 11:17:40.355169   30855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 11:17:40.355236   30855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629111446-24356
	I0629 11:17:40.426668   30855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52308 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/multinode-20220629111446-24356/id_rsa Username:docker}
	I0629 11:17:40.510546   30855 ssh_runner.go:195] Run: systemctl --version
	I0629 11:17:40.515077   30855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 11:17:40.524683   30855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220629111446-24356
	I0629 11:17:40.603536   30855 kubeconfig.go:92] found "multinode-20220629111446-24356" server: "https://127.0.0.1:52312"
	I0629 11:17:40.603561   30855 api_server.go:165] Checking apiserver status ...
	I0629 11:17:40.603600   30855 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0629 11:17:40.613233   30855 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1679/cgroup
	W0629 11:17:40.620803   30855 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1679/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0629 11:17:40.620833   30855 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52312/healthz ...
	I0629 11:17:40.626082   30855 api_server.go:266] https://127.0.0.1:52312/healthz returned 200:
	ok
	I0629 11:17:40.626094   30855 status.go:419] multinode-20220629111446-24356 apiserver status = Running (err=<nil>)
	I0629 11:17:40.626103   30855 status.go:255] multinode-20220629111446-24356 status: &{Name:multinode-20220629111446-24356 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0629 11:17:40.626116   30855 status.go:253] checking status of multinode-20220629111446-24356-m02 ...
	I0629 11:17:40.626335   30855 cli_runner.go:164] Run: docker container inspect multinode-20220629111446-24356-m02 --format={{.State.Status}}
	I0629 11:17:40.696350   30855 status.go:328] multinode-20220629111446-24356-m02 host status = "Running" (err=<nil>)
	I0629 11:17:40.696372   30855 host.go:66] Checking if "multinode-20220629111446-24356-m02" exists ...
	I0629 11:17:40.696635   30855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220629111446-24356-m02
	I0629 11:17:40.767241   30855 host.go:66] Checking if "multinode-20220629111446-24356-m02" exists ...
	I0629 11:17:40.767492   30855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0629 11:17:40.767533   30855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220629111446-24356-m02
	I0629 11:17:40.837789   30855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52442 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/machines/multinode-20220629111446-24356-m02/id_rsa Username:docker}
	I0629 11:17:40.922858   30855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0629 11:17:40.932318   30855 status.go:255] multinode-20220629111446-24356-m02 status: &{Name:multinode-20220629111446-24356-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0629 11:17:40.932338   30855 status.go:253] checking status of multinode-20220629111446-24356-m03 ...
	I0629 11:17:40.932573   30855 cli_runner.go:164] Run: docker container inspect multinode-20220629111446-24356-m03 --format={{.State.Status}}
	I0629 11:17:41.003495   30855 status.go:328] multinode-20220629111446-24356-m03 host status = "Stopped" (err=<nil>)
	I0629 11:17:41.003513   30855 status.go:341] host is not running, skipping remaining checks
	I0629 11:17:41.003534   30855 status.go:255] multinode-20220629111446-24356-m03 status: &{Name:multinode-20220629111446-24356-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (14.22s)

TestMultiNode/serial/StartAfterStop (19.93s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220629111446-24356 node start m03 --alsologtostderr: (18.708160875s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 status
multinode_test.go:259: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220629111446-24356 status: (1.10483053s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (19.93s)

TestMultiNode/serial/RestartKeepsNodes (110.04s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220629111446-24356
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-20220629111446-24356
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-20220629111446-24356: (36.94770628s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220629111446-24356 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220629111446-24356 --wait=true -v=8 --alsologtostderr: (1m12.986684203s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220629111446-24356
--- PASS: TestMultiNode/serial/RestartKeepsNodes (110.04s)

TestMultiNode/serial/DeleteNode (18.71s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220629111446-24356 node delete m03: (16.308281515s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:422: (dbg) Done: kubectl get nodes: (1.519971559s)
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (18.71s)

TestMultiNode/serial/StopMultiNode (25.13s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220629111446-24356 stop: (24.772076339s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220629111446-24356 status: exit status 7 (178.728232ms)

-- stdout --
	multinode-20220629111446-24356
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220629111446-24356-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220629111446-24356 status --alsologtostderr: exit status 7 (177.712509ms)

-- stdout --
	multinode-20220629111446-24356
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220629111446-24356-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0629 11:20:34.680706   31531 out.go:296] Setting OutFile to fd 1 ...
	I0629 11:20:34.680919   31531 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:20:34.680924   31531 out.go:309] Setting ErrFile to fd 2...
	I0629 11:20:34.680928   31531 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0629 11:20:34.681032   31531 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/bin
	I0629 11:20:34.681201   31531 out.go:303] Setting JSON to false
	I0629 11:20:34.681215   31531 mustload.go:65] Loading cluster: multinode-20220629111446-24356
	I0629 11:20:34.681509   31531 config.go:178] Loaded profile config "multinode-20220629111446-24356": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.2
	I0629 11:20:34.681520   31531 status.go:253] checking status of multinode-20220629111446-24356 ...
	I0629 11:20:34.681871   31531 cli_runner.go:164] Run: docker container inspect multinode-20220629111446-24356 --format={{.State.Status}}
	I0629 11:20:34.745515   31531 status.go:328] multinode-20220629111446-24356 host status = "Stopped" (err=<nil>)
	I0629 11:20:34.745545   31531 status.go:341] host is not running, skipping remaining checks
	I0629 11:20:34.745553   31531 status.go:255] multinode-20220629111446-24356 status: &{Name:multinode-20220629111446-24356 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0629 11:20:34.745578   31531 status.go:253] checking status of multinode-20220629111446-24356-m02 ...
	I0629 11:20:34.745856   31531 cli_runner.go:164] Run: docker container inspect multinode-20220629111446-24356-m02 --format={{.State.Status}}
	I0629 11:20:34.809351   31531 status.go:328] multinode-20220629111446-24356-m02 host status = "Stopped" (err=<nil>)
	I0629 11:20:34.809370   31531 status.go:341] host is not running, skipping remaining checks
	I0629 11:20:34.809378   31531 status.go:255] multinode-20220629111446-24356-m02 status: &{Name:multinode-20220629111446-24356-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.13s)

TestMultiNode/serial/RestartMultiNode (57.59s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220629111446-24356 --wait=true -v=8 --alsologtostderr --driver=docker 
E0629 11:20:58.463851   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
E0629 11:21:07.697120   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220629111446-24356 --wait=true -v=8 --alsologtostderr --driver=docker : (55.20536256s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220629111446-24356 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:372: (dbg) Done: kubectl get nodes: (1.504189378s)
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.59s)

TestMultiNode/serial/ValidateNameConflict (34.06s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220629111446-24356
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220629111446-24356-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20220629111446-24356-m02 --driver=docker : exit status 14 (398.007093ms)

-- stdout --
	* [multinode-20220629111446-24356-m02] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14420
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220629111446-24356-m02' is duplicated with machine name 'multinode-20220629111446-24356-m02' in profile 'multinode-20220629111446-24356'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220629111446-24356-m03 --driver=docker 
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220629111446-24356-m03 --driver=docker : (30.316843225s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220629111446-24356
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-20220629111446-24356: exit status 80 (533.128427ms)

-- stdout --
	* Adding node m03 to cluster multinode-20220629111446-24356
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220629111446-24356-m03 already exists in multinode-20220629111446-24356-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-20220629111446-24356-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-20220629111446-24356-m03: (2.762714927s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.06s)

TestScheduledStopUnix (104.39s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-20220629112642-24356 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-20220629112642-24356 --memory=2048 --driver=docker : (29.989648803s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220629112642-24356 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220629112642-24356 -n scheduled-stop-20220629112642-24356
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220629112642-24356 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220629112642-24356 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220629112642-24356 -n scheduled-stop-20220629112642-24356
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220629112642-24356
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220629112642-24356 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220629112642-24356
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-20220629112642-24356: exit status 7 (117.464902ms)

-- stdout --
	scheduled-stop-20220629112642-24356
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220629112642-24356 -n scheduled-stop-20220629112642-24356
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220629112642-24356 -n scheduled-stop-20220629112642-24356: exit status 7 (114.518546ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220629112642-24356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-20220629112642-24356
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-20220629112642-24356: (2.427241745s)
--- PASS: TestScheduledStopUnix (104.39s)

TestSkaffold (70s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe1428617433 version
skaffold_test.go:63: skaffold version: v1.39.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-20220629112827-24356 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-20220629112827-24356 --memory=2600 --driver=docker : (32.024771396s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:110: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe1428617433 run --minikube-profile skaffold-20220629112827-24356 --kube-context skaffold-20220629112827-24356 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:110: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe1428617433 run --minikube-profile skaffold-20220629112827-24356 --kube-context skaffold-20220629112827-24356 --status-check=true --port-forward=false --interactive=false: (23.333770585s)
skaffold_test.go:116: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-5875545496-kcbq5" [531563ab-33b2-4669-938a-4b67f10bad2a] Running
skaffold_test.go:116: (dbg) TestSkaffold: app=leeroy-app healthy within 5.012305841s
skaffold_test.go:119: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-b9f7bb48-bfwgs" [50c14297-2c2c-4df7-abae-b5a4961a4e4e] Running
skaffold_test.go:119: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006987377s
helpers_test.go:175: Cleaning up "skaffold-20220629112827-24356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-20220629112827-24356
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-20220629112827-24356: (2.998862875s)
--- PASS: TestSkaffold (70.00s)

TestInsufficientStorage (13.01s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-20220629112937-24356 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-20220629112937-24356 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (9.684306244s)

-- stdout --
	{"specversion":"1.0","id":"192bd581-6507-4aec-9ef6-54d529828aa0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220629112937-24356] minikube v1.26.0 on Darwin 12.4","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"12c568ea-cded-4796-b55f-4bb6919a4562","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14420"}}
	{"specversion":"1.0","id":"19699a06-b38d-4c9a-bc38-ac476d5430bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig"}}
	{"specversion":"1.0","id":"a8aa6fff-c925-4b78-9ef6-80db8faba8fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"99108748-c410-44e9-ab94-097c126caea7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"27921911-5bff-49b7-8330-6a3e56d45be0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube"}}
	{"specversion":"1.0","id":"1a4cb068-8071-490a-9c52-436f750bfb73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"af98a218-9573-4c2d-a595-774b33ebc9b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e95b1e87-b113-4de4-85dc-7ab5f41d22a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"44dbdc9f-ce6d-44ec-bc2a-3d7584a09eaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"9fef5a0d-c2ad-4dc7-9ae6-f02d39e68945","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220629112937-24356 in cluster insufficient-storage-20220629112937-24356","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"bc01ac26-91c9-4019-ae69-fa8787bf983d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"68bf1605-a240-46af-b3b4-e9c02e4563e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4105c540-9f83-4144-8eaf-63c2d2574ffa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220629112937-24356 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220629112937-24356 --output=json --layout=cluster: exit status 7 (424.612953ms)

-- stdout --
	{"Name":"insufficient-storage-20220629112937-24356","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220629112937-24356","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0629 11:29:47.443826   33205 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220629112937-24356" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220629112937-24356 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220629112937-24356 --output=json --layout=cluster: exit status 7 (421.906006ms)

-- stdout --
	{"Name":"insufficient-storage-20220629112937-24356","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220629112937-24356","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0629 11:29:47.866738   33215 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220629112937-24356" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	E0629 11:29:47.874998   33215 status.go:557] unable to read event log: stat: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/insufficient-storage-20220629112937-24356/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220629112937-24356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-20220629112937-24356
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-20220629112937-24356: (2.479348184s)
--- PASS: TestInsufficientStorage (13.01s)
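Note: the `--output=json --layout=cluster` payload above is machine-checkable. A minimal sketch of consuming it — the `cluster_is_degraded` helper and the shortened profile name are ours for illustration, not part of minikube; field names are copied from the log:

```python
import json

# Payload in the shape emitted by `minikube status --output=json --layout=cluster`
# (field names from the log above; profile name shortened for readability).
payload = """
{"Name": "insufficient-storage", "StatusCode": 507, "StatusName": "InsufficientStorage",
 "StatusDetail": "/var is almost out of disk space",
 "Components": {"kubeconfig": {"Name": "kubeconfig", "StatusCode": 500, "StatusName": "Error"}},
 "Nodes": [{"Name": "insufficient-storage", "StatusCode": 507, "StatusName": "InsufficientStorage",
            "Components": {"apiserver": {"Name": "apiserver", "StatusCode": 405, "StatusName": "Stopped"},
                           "kubelet": {"Name": "kubelet", "StatusCode": 405, "StatusName": "Stopped"}}}]}
"""

def cluster_is_degraded(raw: str) -> bool:
    """True when the cluster or any node reports a 5xx status code."""
    status = json.loads(raw)
    codes = [status["StatusCode"]] + [n["StatusCode"] for n in status.get("Nodes", [])]
    return any(code >= 500 for code in codes)

print(cluster_is_degraded(payload))  # 507 at both cluster and node level -> True
```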

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.46s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.26.0 on darwin
- MINIKUBE_LOCATION=14420
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2276022488/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2276022488/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2276022488/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2276022488/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.46s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.74s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.26.0 on darwin
- MINIKUBE_LOCATION=14420
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1118933640/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1118933640/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1118933640/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1118933640/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.74s)

TestStoppedBinaryUpgrade/Setup (0.75s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.75s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.59s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-20220629113518-24356
E0629 11:36:07.732818   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
version_upgrade_test.go:213: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-20220629113518-24356: (3.591798866s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.59s)

TestPause/serial/Start (44.63s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220629113612-24356 --memory=2048 --install-addons=false --wait=all --driver=docker 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220629113612-24356 --memory=2048 --install-addons=false --wait=all --driver=docker : (44.627631187s)
--- PASS: TestPause/serial/Start (44.63s)

TestPause/serial/SecondStartNoReconfiguration (42.07s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220629113612-24356 --alsologtostderr -v=1 --driver=docker 
E0629 11:37:08.181474   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220629113612-24356 --alsologtostderr -v=1 --driver=docker : (42.05722367s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (42.07s)

TestPause/serial/Pause (0.73s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20220629113612-24356 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.4s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220629113845-24356 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20220629113845-24356 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (395.004614ms)

-- stdout --
	* [NoKubernetes-20220629113845-24356] minikube v1.26.0 on Darwin 12.4
	  - MINIKUBE_LOCATION=14420
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.40s)
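The exit status 14 above is minikube's MK_USAGE error for conflicting flags. A hedged sketch of the rule the test exercises — `validate_flags` is a hypothetical stand-in, not minikube's actual implementation:

```python
from typing import Optional

def validate_flags(no_kubernetes: bool, kubernetes_version: Optional[str]) -> Optional[str]:
    """Return a usage-error message for conflicting flags, or None if valid."""
    if no_kubernetes and kubernetes_version:
        return "cannot specify --kubernetes-version with --no-kubernetes"
    return None

print(validate_flags(True, "1.20"))  # conflicting flags -> error message
print(validate_flags(True, None))    # --no-kubernetes alone -> None (accepted)
```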

TestNoKubernetes/serial/StartWithK8s (30.49s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220629113845-24356 --driver=docker 
E0629 11:39:10.799308   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220629113845-24356 --driver=docker : (30.02636957s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220629113845-24356 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (30.49s)

TestNoKubernetes/serial/StartWithStopK8s (17.45s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220629113845-24356 --no-kubernetes --driver=docker 
E0629 11:39:24.344144   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220629113845-24356 --no-kubernetes --driver=docker : (14.432171219s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220629113845-24356 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-20220629113845-24356 status -o json: exit status 2 (459.561642ms)

-- stdout --
	{"Name":"NoKubernetes-20220629113845-24356","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-20220629113845-24356
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-20220629113845-24356: (2.554973931s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.45s)
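The exit status 2 from `status -o json` corresponds to a running host with Kubernetes components stopped. A sketch of checking that payload — the JSON shape is copied from the log; the check itself is ours:

```python
import json

# Shape of `minikube status -o json` for a profile restarted with --no-kubernetes
# (profile name shortened; fields copied from the log above).
raw = ('{"Name": "NoKubernetes", "Host": "Running", "Kubelet": "Stopped",'
       ' "APIServer": "Stopped", "Kubeconfig": "Configured", "Worker": false}')

status = json.loads(raw)
host_only = (status["Host"] == "Running"
             and status["Kubelet"] == "Stopped"
             and status["APIServer"] == "Stopped")
print(host_only)  # True: the container runs, but no Kubernetes inside it
```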

TestNoKubernetes/serial/Start (6.64s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220629113845-24356 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220629113845-24356 --no-kubernetes --driver=docker : (6.636522154s)
--- PASS: TestNoKubernetes/serial/Start (6.64s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220629113845-24356 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220629113845-24356 "sudo systemctl is-active --quiet service kubelet": exit status 1 (424.839882ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)
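`systemctl is-active --quiet` prints nothing and signals the unit state purely via exit code: 0 means active, non-zero means not active (the status 3 in the stderr above is the conventional "program is not running" value). The test's interpretation, sketched with our own helper name:

```python
# systemctl is-active exit codes: 0 = unit active; any non-zero = not active
# (3 is the usual "inactive/dead" value, matching the log above).
# `kubelet_running` is an illustrative name, not harness code.
def kubelet_running(is_active_exit_code: int) -> bool:
    return is_active_exit_code == 0

print(kubelet_running(3))  # False: kubelet is not running, so this test passes
```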

TestNoKubernetes/serial/ProfileList (1.56s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.56s)

TestNoKubernetes/serial/Stop (1.68s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-20220629113845-24356
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-20220629113845-24356: (1.675845129s)
--- PASS: TestNoKubernetes/serial/Stop (1.68s)

TestNoKubernetes/serial/StartNoArgs (4.44s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220629113845-24356 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220629113845-24356 --driver=docker : (4.439984357s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (4.44s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.46s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220629113845-24356 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220629113845-24356 "sudo systemctl is-active --quiet service kubelet": exit status 1 (455.882025ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.46s)

TestNetworkPlugins/group/auto/Start (53.75s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-20220629112950-24356 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 
E0629 11:39:52.027709   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p auto-20220629112950-24356 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : (53.752235315s)
--- PASS: TestNetworkPlugins/group/auto/Start (53.75s)

TestNetworkPlugins/group/auto/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-20220629112950-24356 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.46s)

TestNetworkPlugins/group/auto/NetCatPod (15.6s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220629112950-24356 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context auto-20220629112950-24356 replace --force -f testdata/netcat-deployment.yaml: (1.571679327s)
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-756p6" [1627bd2c-970d-4410-9a77-1a6ea939cec3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-756p6" [1627bd2c-970d-4410-9a77-1a6ea939cec3] Running
E0629 11:40:58.507362   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 14.007812163s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (15.60s)

TestNetworkPlugins/group/auto/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220629112950-24356 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-20220629112950-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (5.1s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-20220629112950-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-20220629112950-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.103092853s)

** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.10s)
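The `nc -z` used by the Localhost and HairPin probes only tests whether a TCP connection can be opened; no data is sent. A runnable Python equivalent of that probe, demonstrated against a local listener instead of the in-cluster `netcat` service on port 8080:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Connect-scan a TCP port, nc -z style: True if the connection opens."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo with a local listener so the sketch runs anywhere:
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
print(tcp_reachable("127.0.0.1", srv.getsockname()[1]))  # True: listener accepts
srv.close()
```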

TestNetworkPlugins/group/kindnet/Start (51.39s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-20220629112951-24356 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-20220629112951-24356 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : (51.394793905s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.39s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-kx24p" [2c3ad518-4e18-4c1c-a3d8-0a7385c8464b] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.015223972s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-20220629112951-24356 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)

TestNetworkPlugins/group/kindnet/NetCatPod (15.6s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-20220629112951-24356 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context kindnet-20220629112951-24356 replace --force -f testdata/netcat-deployment.yaml: (1.572473673s)
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-kblcq" [8f003baa-fc5b-49bc-b7f6-2b71d6d342b9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-kblcq" [8f003baa-fc5b-49bc-b7f6-2b71d6d342b9] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 14.00994548s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (15.60s)

TestNetworkPlugins/group/kindnet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220629112951-24356 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

TestNetworkPlugins/group/kindnet/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-20220629112951-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-20220629112951-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/cilium/Start (81.66s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-20220629112951-24356 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p cilium-20220629112951-24356 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : (1m21.663773132s)
--- PASS: TestNetworkPlugins/group/cilium/Start (81.66s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-lclt2" [63ce7023-42ba-4fd0-a2d7-2fb636090d65] Running

=== CONT  TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.014457621s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/Start (77.64s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-20220629112951-24356 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p calico-20220629112951-24356 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : (1m17.636560873s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.64s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.62s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cilium-20220629112951-24356 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.62s)

TestNetworkPlugins/group/cilium/NetCatPod (15.56s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-20220629112951-24356 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context cilium-20220629112951-24356 replace --force -f testdata/netcat-deployment.yaml: (2.520968027s)
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-8ctpm" [a8d72008-e15b-4134-8e3e-d508f043806b] Pending
helpers_test.go:342: "netcat-869c55b6dc-8ctpm" [a8d72008-e15b-4134-8e3e-d508f043806b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-8ctpm" [a8d72008-e15b-4134-8e3e-d508f043806b] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 13.008453439s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (15.56s)

TestNetworkPlugins/group/cilium/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-20220629112951-24356 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.13s)

TestNetworkPlugins/group/cilium/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-20220629112951-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.11s)

TestNetworkPlugins/group/cilium/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-20220629112951-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.11s)

TestNetworkPlugins/group/false/Start (45.87s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p false-20220629112951-24356 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 
E0629 11:44:24.348978   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p false-20220629112951-24356 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : (45.868581244s)
--- PASS: TestNetworkPlugins/group/false/Start (45.87s)

TestNetworkPlugins/group/false/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-20220629112951-24356 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.47s)

TestNetworkPlugins/group/false/NetCatPod (15s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-20220629112951-24356 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context false-20220629112951-24356 replace --force -f testdata/netcat-deployment.yaml: (1.967691023s)
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-52twj" [22f9fa64-090c-46db-bc47-810f2112f41e] Pending
helpers_test.go:342: "netcat-869c55b6dc-52twj" [22f9fa64-090c-46db-bc47-810f2112f41e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-52twj" [22f9fa64-090c-46db-bc47-810f2112f41e] Running

=== CONT  TestNetworkPlugins/group/false/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.009458937s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (15.00s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-6c5mv" [d78c73aa-1826-4581-bdb8-9e84712a97cd] Running

=== CONT  TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.016578036s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/false/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220629112951-24356 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.13s)

TestNetworkPlugins/group/false/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:188: (dbg) Run:  kubectl --context false-20220629112951-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.10s)

TestNetworkPlugins/group/false/HairPin (5.12s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Run:  kubectl --context false-20220629112951-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

=== CONT  TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context false-20220629112951-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.116642849s)

** stderr ** 
	command terminated with exit code 1
** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.12s)

TestNetworkPlugins/group/calico/KubeletFlags (0.51s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-20220629112951-24356 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.51s)

TestNetworkPlugins/group/calico/NetCatPod (15.72s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context calico-20220629112951-24356 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context calico-20220629112951-24356 replace --force -f testdata/netcat-deployment.yaml: (1.67990851s)
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-jnpdj" [21814733-b5b6-47b7-aaee-effa7900d232] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-jnpdj" [21814733-b5b6-47b7-aaee-effa7900d232] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.007093152s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (15.72s)

TestNetworkPlugins/group/bridge/Start (51.25s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-20220629112950-24356 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-20220629112950-24356 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : (51.252836804s)
--- PASS: TestNetworkPlugins/group/bridge/Start (51.25s)

TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-20220629112951-24356 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:188: (dbg) Run:  kubectl --context calico-20220629112951-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:238: (dbg) Run:  kubectl --context calico-20220629112951-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/Start (83.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-20220629112950-24356 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 
E0629 11:45:46.996740   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629112950-24356/client.crt: no such file or directory
E0629 11:45:47.001885   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629112950-24356/client.crt: no such file or directory
E0629 11:45:47.012128   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629112950-24356/client.crt: no such file or directory
E0629 11:45:47.032632   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629112950-24356/client.crt: no such file or directory
E0629 11:45:47.073539   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629112950-24356/client.crt: no such file or directory
E0629 11:45:47.153656   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629112950-24356/client.crt: no such file or directory
E0629 11:45:47.314068   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629112950-24356/client.crt: no such file or directory
E0629 11:45:47.634213   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629112950-24356/client.crt: no such file or directory
E0629 11:45:48.276400   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629112950-24356/client.crt: no such file or directory
E0629 11:45:49.557295   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629112950-24356/client.crt: no such file or directory
E0629 11:45:52.117562   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629112950-24356/client.crt: no such file or directory
E0629 11:45:57.237968   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629112950-24356/client.crt: no such file or directory
E0629 11:45:58.601286   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
E0629 11:46:07.478629   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629112950-24356/client.crt: no such file or directory
E0629 11:46:07.834933   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-20220629112950-24356 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : (1m23.222566406s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (83.22s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-20220629112950-24356 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.46s)

TestNetworkPlugins/group/bridge/NetCatPod (14.88s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220629112950-24356 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context bridge-20220629112950-24356 replace --force -f testdata/netcat-deployment.yaml: (1.814665923s)
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-pqn74" [84d44298-8f07-4965-89bf-b9971e813c49] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-pqn74" [84d44298-8f07-4965-89bf-b9971e813c49] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.007488725s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (14.88s)

TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220629112950-24356 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-20220629112950-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-20220629112950-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestNetworkPlugins/group/kubenet/Start (45.18s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-20220629112950-24356 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-20220629112950-24356 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : (45.182346371s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (45.18s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-20220629112950-24356 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.47s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (16.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220629112950-24356 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context enable-default-cni-20220629112950-24356 replace --force -f testdata/netcat-deployment.yaml: (2.364520236s)
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-zflqz" [be042614-2b40-4b05-8381-ac045a708511] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0629 11:47:00.633260   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629112951-24356/client.crt: no such file or directory
E0629 11:47:00.638351   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629112951-24356/client.crt: no such file or directory
E0629 11:47:00.648693   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629112951-24356/client.crt: no such file or directory
E0629 11:47:00.670167   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629112951-24356/client.crt: no such file or directory
E0629 11:47:00.710394   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629112951-24356/client.crt: no such file or directory
E0629 11:47:00.790550   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629112951-24356/client.crt: no such file or directory
E0629 11:47:00.952196   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629112951-24356/client.crt: no such file or directory
E0629 11:47:01.272980   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629112951-24356/client.crt: no such file or directory
E0629 11:47:01.913298   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629112951-24356/client.crt: no such file or directory
E0629 11:47:03.194903   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629112951-24356/client.crt: no such file or directory
E0629 11:47:05.756169   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629112951-24356/client.crt: no such file or directory
helpers_test.go:342: "netcat-869c55b6dc-zflqz" [be042614-2b40-4b05-8381-ac045a708511] Running
E0629 11:47:08.923115   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629112950-24356/client.crt: no such file or directory
E0629 11:47:10.876574   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629112951-24356/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.007779518s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (16.39s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220629112950-24356 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-20220629112950-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-20220629112950-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-20220629112950-24356 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.47s)

TestNetworkPlugins/group/kubenet/NetCatPod (16.13s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-20220629112950-24356 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Done: kubectl --context kubenet-20220629112950-24356 replace --force -f testdata/netcat-deployment.yaml: (2.033136541s)
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-n56p2" [d81f73ee-30b4-4462-87e4-31e3877e22f6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0629 11:47:21.117874   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629112951-24356/client.crt: no such file or directory
helpers_test.go:342: "netcat-869c55b6dc-n56p2" [d81f73ee-30b4-4462-87e4-31e3877e22f6] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 14.006979891s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (16.13s)

TestNetworkPlugins/group/kubenet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220629112950-24356 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.14s)

TestNetworkPlugins/group/kubenet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kubenet-20220629112950-24356 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.12s)

TestStartStop/group/no-preload/serial/FirstStart (58.51s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220629114832-24356 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.2
E0629 11:48:46.650852   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629112951-24356/client.crt: no such file or directory
E0629 11:48:46.656006   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629112951-24356/client.crt: no such file or directory
E0629 11:48:46.666181   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629112951-24356/client.crt: no such file or directory
E0629 11:48:46.686529   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629112951-24356/client.crt: no such file or directory
E0629 11:48:46.726644   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629112951-24356/client.crt: no such file or directory
E0629 11:48:46.806963   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629112951-24356/client.crt: no such file or directory
E0629 11:48:46.967151   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629112951-24356/client.crt: no such file or directory
E0629 11:48:47.288502   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629112951-24356/client.crt: no such file or directory
E0629 11:48:47.929619   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629112951-24356/client.crt: no such file or directory
E0629 11:48:49.210724   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629112951-24356/client.crt: no such file or directory
E0629 11:48:51.770956   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629112951-24356/client.crt: no such file or directory
E0629 11:48:56.891326   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629112951-24356/client.crt: no such file or directory
E0629 11:49:07.131910   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629112951-24356/client.crt: no such file or directory
E0629 11:49:24.445919   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
E0629 11:49:27.613255   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629112951-24356/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220629114832-24356 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.2: (58.514590997s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (58.51s)

TestStartStop/group/no-preload/serial/DeployApp (12.75s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220629114832-24356 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Done: kubectl --context no-preload-20220629114832-24356 create -f testdata/busybox.yaml: (1.612175007s)
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [634d68dd-bea9-48fe-b5eb-cbfe5782771a] Pending
helpers_test.go:342: "busybox" [634d68dd-bea9-48fe-b5eb-cbfe5782771a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [634d68dd-bea9-48fe-b5eb-cbfe5782771a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.0129382s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220629114832-24356 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.75s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.77s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20220629114832-24356 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-20220629114832-24356 describe deploy/metrics-server -n kube-system
E0629 11:49:44.488401   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629112951-24356/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.77s)

TestStartStop/group/no-preload/serial/Stop (12.54s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-20220629114832-24356 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-20220629114832-24356 --alsologtostderr -v=3: (12.542548777s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.54s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.33s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220629114832-24356 -n no-preload-20220629114832-24356
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220629114832-24356 -n no-preload-20220629114832-24356: exit status 7 (117.680533ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-20220629114832-24356 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.33s)

TestStartStop/group/no-preload/serial/SecondStart (300.59s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220629114832-24356 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.2
E0629 11:49:59.602545   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/false-20220629112951-24356/client.crt: no such file or directory
E0629 11:49:59.607702   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/false-20220629112951-24356/client.crt: no such file or directory
E0629 11:49:59.617946   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/false-20220629112951-24356/client.crt: no such file or directory
E0629 11:49:59.638885   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/false-20220629112951-24356/client.crt: no such file or directory
E0629 11:49:59.679272   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/false-20220629112951-24356/client.crt: no such file or directory
E0629 11:49:59.761200   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/false-20220629112951-24356/client.crt: no such file or directory
E0629 11:49:59.921722   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/false-20220629112951-24356/client.crt: no such file or directory
E0629 11:50:00.242013   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/false-20220629112951-24356/client.crt: no such file or directory
E0629 11:50:00.882221   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/false-20220629112951-24356/client.crt: no such file or directory
E0629 11:50:02.162787   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/false-20220629112951-24356/client.crt: no such file or directory
E0629 11:50:04.724732   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/false-20220629112951-24356/client.crt: no such file or directory
E0629 11:50:08.575801   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629112951-24356/client.crt: no such file or directory
E0629 11:50:08.758669   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629112951-24356/client.crt: no such file or directory
E0629 11:50:08.765012   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629112951-24356/client.crt: no such file or directory
E0629 11:50:08.775418   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629112951-24356/client.crt: no such file or directory
E0629 11:50:08.796510   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629112951-24356/client.crt: no such file or directory
E0629 11:50:08.836722   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629112951-24356/client.crt: no such file or directory
E0629 11:50:08.917340   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629112951-24356/client.crt: no such file or directory
E0629 11:50:09.116268   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629112951-24356/client.crt: no such file or directory
E0629 11:50:09.437034   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629112951-24356/client.crt: no such file or directory
E0629 11:50:09.845143   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/false-20220629112951-24356/client.crt: no such file or directory
E0629 11:50:10.077333   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629112951-24356/client.crt: no such file or directory
E0629 11:50:11.358008   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629112951-24356/client.crt: no such file or directory
E0629 11:50:13.919012   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629112951-24356/client.crt: no such file or directory
E0629 11:50:19.091367   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629112951-24356/client.crt: no such file or directory
E0629 11:50:20.085590   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/false-20220629112951-24356/client.crt: no such file or directory
E0629 11:50:29.332043   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629112951-24356/client.crt: no such file or directory
E0629 11:50:40.566389   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/false-20220629112951-24356/client.crt: no such file or directory
E0629 11:50:41.685632   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
E0629 11:50:47.005295   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629112950-24356/client.crt: no such file or directory
E0629 11:50:47.495404   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
E0629 11:50:49.813050   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629112951-24356/client.crt: no such file or directory
E0629 11:50:58.611240   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
E0629 11:51:07.845791   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
E0629 11:51:14.350874   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/bridge-20220629112950-24356/client.crt: no such file or directory
E0629 11:51:14.357287   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/bridge-20220629112950-24356/client.crt: no such file or directory
E0629 11:51:14.369505   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/bridge-20220629112950-24356/client.crt: no such file or directory
E0629 11:51:14.391675   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/bridge-20220629112950-24356/client.crt: no such file or directory
E0629 11:51:14.431874   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/bridge-20220629112950-24356/client.crt: no such file or directory
E0629 11:51:14.513606   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/bridge-20220629112950-24356/client.crt: no such file or directory
E0629 11:51:14.675369   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/bridge-20220629112950-24356/client.crt: no such file or directory
E0629 11:51:14.723705   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629112950-24356/client.crt: no such file or directory
E0629 11:51:14.996519   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/bridge-20220629112950-24356/client.crt: no such file or directory
E0629 11:51:15.638259   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/bridge-20220629112950-24356/client.crt: no such file or directory
E0629 11:51:16.918637   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/bridge-20220629112950-24356/client.crt: no such file or directory
E0629 11:51:19.478957   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/bridge-20220629112950-24356/client.crt: no such file or directory
E0629 11:51:21.529554   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/false-20220629112951-24356/client.crt: no such file or directory
E0629 11:51:24.599565   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/bridge-20220629112950-24356/client.crt: no such file or directory
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220629114832-24356 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.2: (5m0.125140938s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220629114832-24356 -n no-preload-20220629114832-24356
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (300.59s)

TestStartStop/group/old-k8s-version/serial/Stop (1.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-20220629114717-24356 --alsologtostderr -v=3
E0629 11:52:59.285699   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubenet-20220629112950-24356/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-20220629114717-24356 --alsologtostderr -v=3: (1.635085113s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.64s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220629114717-24356 -n old-k8s-version-20220629114717-24356: exit status 7 (117.778263ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20220629114717-24356 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.33s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (19.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-qmktl" [686867af-2f46-499f-a6b3-5322753bab16] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0629 11:54:59.613495   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/false-20220629112951-24356/client.crt: no such file or directory
E0629 11:55:02.171897   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubenet-20220629112950-24356/client.crt: no such file or directory
E0629 11:55:08.767179   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629112951-24356/client.crt: no such file or directory
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-qmktl" [686867af-2f46-499f-a6b3-5322753bab16] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 19.012312367s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (19.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.56s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-qmktl" [686867af-2f46-499f-a6b3-5322753bab16] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007843215s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-20220629114832-24356 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Done: kubectl --context no-preload-20220629114832-24356 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.548728576s)
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.56s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.53s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-20220629114832-24356 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.53s)

TestStartStop/group/embed-certs/serial/FirstStart (47.61s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220629115611-24356 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.2
E0629 11:56:14.360057   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/bridge-20220629112950-24356/client.crt: no such file or directory
E0629 11:56:42.055841   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/bridge-20220629112950-24356/client.crt: no such file or directory
E0629 11:56:59.506283   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/enable-default-cni-20220629112950-24356/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220629115611-24356 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.2: (47.610787332s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (47.61s)

TestStartStop/group/embed-certs/serial/DeployApp (12.72s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220629115611-24356 create -f testdata/busybox.yaml
E0629 11:57:00.651374   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kindnet-20220629112951-24356/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) Done: kubectl --context embed-certs-20220629115611-24356 create -f testdata/busybox.yaml: (1.592406396s)
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [2182d11d-b79c-46b1-9538-37317121cdc9] Pending
helpers_test.go:342: "busybox" [2182d11d-b79c-46b1-9538-37317121cdc9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [2182d11d-b79c-46b1-9538-37317121cdc9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.015210797s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220629115611-24356 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.72s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.74s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-20220629115611-24356 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-20220629115611-24356 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.74s)

TestStartStop/group/embed-certs/serial/Stop (12.58s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-20220629115611-24356 --alsologtostderr -v=3
E0629 11:57:18.302042   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubenet-20220629112950-24356/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-20220629115611-24356 --alsologtostderr -v=3: (12.576697502s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.58s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220629115611-24356 -n embed-certs-20220629115611-24356
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220629115611-24356 -n embed-certs-20220629115611-24356: exit status 7 (120.199579ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-20220629115611-24356 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/embed-certs/serial/SecondStart (298.99s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220629115611-24356 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.2
E0629 11:57:27.201259   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/enable-default-cni-20220629112950-24356/client.crt: no such file or directory
E0629 11:57:46.017702   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/kubenet-20220629112950-24356/client.crt: no such file or directory
E0629 11:58:46.668669   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629112951-24356/client.crt: no such file or directory
E0629 11:59:24.464807   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/skaffold-20220629112827-24356/client.crt: no such file or directory
E0629 11:59:32.661840   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629114832-24356/client.crt: no such file or directory
E0629 11:59:32.668209   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629114832-24356/client.crt: no such file or directory
E0629 11:59:32.679953   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629114832-24356/client.crt: no such file or directory
E0629 11:59:32.700205   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629114832-24356/client.crt: no such file or directory
E0629 11:59:32.741350   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629114832-24356/client.crt: no such file or directory
E0629 11:59:32.822191   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629114832-24356/client.crt: no such file or directory
E0629 11:59:32.983713   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629114832-24356/client.crt: no such file or directory
E0629 11:59:33.304532   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629114832-24356/client.crt: no such file or directory
E0629 11:59:33.946752   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629114832-24356/client.crt: no such file or directory
E0629 11:59:35.251272   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629114832-24356/client.crt: no such file or directory
E0629 11:59:37.812245   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629114832-24356/client.crt: no such file or directory
E0629 11:59:42.934248   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629114832-24356/client.crt: no such file or directory
E0629 11:59:53.178598   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629114832-24356/client.crt: no such file or directory
E0629 11:59:59.624836   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/false-20220629112951-24356/client.crt: no such file or directory
E0629 12:00:08.780301   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629112951-24356/client.crt: no such file or directory
E0629 12:00:13.661357   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629114832-24356/client.crt: no such file or directory
E0629 12:00:47.028045   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/auto-20220629112950-24356/client.crt: no such file or directory
E0629 12:00:54.623352   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629114832-24356/client.crt: no such file or directory
E0629 12:00:58.634597   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/addons-20220629105308-24356/client.crt: no such file or directory
E0629 12:01:07.866427   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220629115611-24356 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.2: (4m58.512180541s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220629115611-24356 -n embed-certs-20220629115611-24356
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (298.99s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-9qp4w" [2e8b31a8-de1f-45db-90b7-8d4b00453b5b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-9qp4w" [2e8b31a8-de1f-45db-90b7-8d4b00453b5b] Running
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.014493589s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.82s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-9qp4w" [2e8b31a8-de1f-45db-90b7-8d4b00453b5b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008882906s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-20220629115611-24356 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Done: kubectl --context embed-certs-20220629115611-24356 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.812758903s)
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.82s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.48s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-20220629115611-24356 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.48s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (83.39s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220629120335-24356 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.2
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220629120335-24356 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.2: (1m23.387040788s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (83.39s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (11.7s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220629120335-24356 create -f testdata/busybox.yaml
E0629 12:04:59.636112   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/false-20220629112951-24356/client.crt: no such file or directory
E0629 12:05:00.394277   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/no-preload-20220629114832-24356/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) Done: kubectl --context default-k8s-different-port-20220629120335-24356 create -f testdata/busybox.yaml: (1.564947506s)
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [f4bc5ae1-b359-47ca-824c-acb9935846ef] Pending
helpers_test.go:342: "busybox" [f4bc5ae1-b359-47ca-824c-acb9935846ef] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:342: "busybox" [f4bc5ae1-b359-47ca-824c-acb9935846ef] Running
E0629 12:05:08.790115   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/calico-20220629112951-24356/client.crt: no such file or directory
E0629 12:05:09.732437   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/cilium-20220629112951-24356/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 10.017353932s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220629120335-24356 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (11.70s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-different-port-20220629120335-24356 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-different-port-20220629120335-24356 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/default-k8s-different-port/serial/Stop (12.59s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220629120335-24356 --alsologtostderr -v=3
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220629120335-24356 --alsologtostderr -v=3: (12.589971984s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (12.59s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220629120335-24356 -n default-k8s-different-port-20220629120335-24356
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220629120335-24356 -n default-k8s-different-port-20220629120335-24356: exit status 7 (117.652499ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-different-port-20220629120335-24356 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (300.25s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220629120335-24356 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.2
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220629120335-24356 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.2: (4m59.730395704s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220629120335-24356 -n default-k8s-different-port-20220629120335-24356
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (300.25s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (14.02s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-q9lqr" [513c4ddc-31bf-4472-b555-4f007825f07f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-q9lqr" [513c4ddc-31bf-4472-b555-4f007825f07f] Running
=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.016093482s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (14.02s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.88s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-q9lqr" [513c4ddc-31bf-4472-b555-4f007825f07f] Running
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006253111s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-different-port-20220629120335-24356 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:291: (dbg) Done: kubectl --context default-k8s-different-port-20220629120335-24356 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.872295032s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.88s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.5s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20220629120335-24356 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.50s)

TestStartStop/group/newest-cni/serial/FirstStart (42.19s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220629121133-24356 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.2
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220629121133-24356 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.2: (42.193869402s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.19s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.64s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-20220629121133-24356 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.64s)

TestStartStop/group/newest-cni/serial/Stop (12.66s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-20220629121133-24356 --alsologtostderr -v=3
=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-20220629121133-24356 --alsologtostderr -v=3: (12.658067335s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.66s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220629121133-24356 -n newest-cni-20220629121133-24356
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220629121133-24356 -n newest-cni-20220629121133-24356: exit status 7 (120.159113ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-20220629121133-24356 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/newest-cni/serial/SecondStart (19.48s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220629121133-24356 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.2
E0629 12:12:30.952471   24356 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14420-23160-7d3b93abdd89ce8ebba3c81494e660414100c7c4/.minikube/profiles/functional-20220629105817-24356/client.crt: no such file or directory
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220629121133-24356 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.2: (18.951385606s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220629121133-24356 -n newest-cni-20220629121133-24356
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (19.48s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.5s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-20220629121133-24356 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.50s)
Test skip (18/289)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.24.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.24.2/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.24.2/cached-images (0.00s)

TestDownloadOnly/v1.24.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.24.2/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.24.2/binaries (0.00s)

TestAddons/parallel/Registry (20.9s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:282: registry stabilized in 12.512906ms
addons_test.go:284: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-v8pk9" [ed45345d-97e2-4db7-9cb4-e985e9c47e39] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:284: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.009357115s
addons_test.go:287: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-vtk8h" [5a120e2c-8832-4332-b0a5-01014f81af9c] Running
addons_test.go:287: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.011570108s
addons_test.go:292: (dbg) Run:  kubectl --context addons-20220629105308-24356 delete po -l run=registry-test --now

=== CONT  TestAddons/parallel/Registry
addons_test.go:292: (dbg) Done: kubectl --context addons-20220629105308-24356 delete po -l run=registry-test --now: (3.015743909s)
addons_test.go:297: (dbg) Run:  kubectl --context addons-20220629105308-24356 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) Done: kubectl --context addons-20220629105308-24356 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.845841451s)
addons_test.go:307: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (20.90s)
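The registry check above runs `wget --spider -S` from a busybox pod against the in-cluster registry service; the skip fires because the remaining steps assume direct node connectivity. Below is a minimal stand-alone sketch of such a headers-only probe, using a throwaway local HTTP server in place of `registry.kube-system.svc.cluster.local`; the `Handler` class and the HEAD-based approximation of `wget --spider` are illustrative assumptions, not the test's code.

```python
import http.server
import http.client
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    """Stand-in for the registry endpoint: answer HEAD probes with 200 OK."""
    def do_HEAD(self):
        self.send_response(200)
        self.end_headers()
    def log_message(self, *args):
        pass  # silence per-request logging

# Bind to port 0 so the OS picks a free port.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Rough equivalent of `wget --spider -S http://<registry>/`: fetch headers only.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port, timeout=5)
conn.request("HEAD", "/")
status = conn.getresponse().status
server.shutdown()
print(status)
```

A non-2xx status (or a connection error) here would correspond to the pod being unable to reach the registry service.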

TestAddons/parallel/Ingress (11.82s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:164: (dbg) Run:  kubectl --context addons-20220629105308-24356 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:184: (dbg) Run:  kubectl --context addons-20220629105308-24356 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:197: (dbg) Run:  kubectl --context addons-20220629105308-24356 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [9d3ebeca-0e32-4e8d-b82c-d51e6370a84d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [9d3ebeca-0e32-4e8d-b82c-d51e6370a84d] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.007395331s
addons_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220629105308-24356 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:234: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (11.82s)
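The curl above hits 127.0.0.1 but sets `Host: nginx.example.com`, because the ingress controller routes on the Host header rather than on the destination IP. A toy sketch of that host-based dispatch follows; the `routes` table and backend names are hypothetical, not the controller's actual configuration.

```python
# Hypothetical host -> backend routing table, as an ingress rule set might
# resolve it. All traffic arrives on one IP; the Host header disambiguates.
routes = {"nginx.example.com": "nginx-svc:80"}

def route(host_header: str) -> str:
    """Pick a backend for a request based on its Host header."""
    return routes.get(host_header, "default-backend:80")

print(route("nginx.example.com"))    # -> nginx-svc:80
print(route("unknown.example.com"))  # -> default-backend:80
```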

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:450: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (7.1s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220629105817-24356 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1564: (dbg) Run:  kubectl --context functional-20220629105817-24356 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-578cdc45cb-f779w" [3e42abce-6c6e-41ed-b9cb-0e55178ba52c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:342: "hello-node-connect-578cdc45cb-f779w" [3e42abce-6c6e-41ed-b9cb-0e55178ba52c] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.006118204s
functional_test.go:1575: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (7.10s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/flannel (0.65s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220629112950-24356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-20220629112950-24356
--- SKIP: TestNetworkPlugins/group/flannel (0.65s)

TestNetworkPlugins/group/custom-flannel (0.61s)

=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220629112951-24356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-flannel-20220629112951-24356
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.61s)

TestStartStop/group/disable-driver-mounts (0.46s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220629120335-24356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-20220629120335-24356
--- SKIP: TestStartStop/group/disable-driver-mounts (0.46s)
